Thursday 30 June 2016

How to mock classes in Rails controller with mocha

I'm having a hard time understanding how to use the mocha mocking library for a certain type of unit test in Rails.

I have a controller which initializes an object from a helper library and then calls a method on it. My code looks similar to this:

class ObjectsController < ApplicationController
  before_action :set_adapter

  def index
    response = @adapter.get_objects

    render json: response
  end

  private
    def set_adapter
      arg = request.headers["X-ARG"]
      @adapter = Adapter::Adapter.new(arg)
    end
end

In my tests, I want to mock the adapter to make sure that the get_objects() method is called. I'm trying to figure out the best way to implement this kind of test, but I seem to be stuck on how to mock an existing object within a class.

Can anyone help me out?
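For what it's worth, one way to approach this with mocha is to stub the constructor so the controller receives a mock — a rough sketch, assuming an ActionController::TestCase-style test and that the names below match your app:

adapter = mock('adapter')
adapter.expects(:get_objects).returns([])
Adapter::Adapter.expects(:new).with('some-value').returns(adapter)

@request.headers["X-ARG"] = 'some-value'
get :index   # mocha verifies the expectations automatically on teardown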

Android: Robotium: I have two tabhosts on screen

I have two tabhosts on screen.
The tabs view is android.R.id.tabs.
I use
ViewGroup tabs = (ViewGroup) solo.getView(android.R.id.tabs);
View viewYouWantToDoStuffWith = tabs.getChildAt(x);
solo.clickOnView(viewYouWantToDoStuffWith);
but that can only control one of them.
How can I do this with solo?
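A sketch of one approach (an assumption on my part): use Robotium's getCurrentViews to fetch both TabHost instances and address each host's own android.R.id.tabs strip instead of resolving the id globally:

ArrayList<TabHost> hosts = solo.getCurrentViews(TabHost.class);
TabHost secondHost = hosts.get(1);                          // 0 = first tabhost, 1 = second
ViewGroup tabs = (ViewGroup) secondHost.findViewById(android.R.id.tabs);
View viewYouWantToDoStuffWith = tabs.getChildAt(x);
solo.clickOnView(viewYouWantToDoStuffWith);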

Ruby returns nil class in testing, working fine in dev

I'm trying to implement a login system using Rails rather than an external gem and have been following the Michael Hartl tutorial that most people seem to start with. So far the site itself is functioning fine; it's the two tests to do with logging in that I'm struggling with:

require 'test_helper'

class UsersLoginTest < ActionDispatch::IntegrationTest

    def setup
      @user = users(:michael)
    end

    test "login with invalid information" do
      get login_path
      assert_template 'sessions/new'
      post login_path, params: { session: { email: "", password: "" } }
      assert_template 'sessions/new'
      assert_not flash.empty?
      get root_path
      assert flash.empty?
    end

    test "login with valid information" do
      get login_path
      post login_path, params: { session: { email:    @user.email,
                                      password: 'password' } }
      assert_redirected_to @user
      follow_redirect!
      assert_template 'users/show'
      assert_select "a[href=?]", login_path, count: 0
      assert_select "a[href=?]", logout_path
      assert_select "a[href=?]", user_path(@user)
    end

  end

My error messages are:

ERROR["test_login_with_invalid_information", UsersLoginTest, 2016-06-30 22:30:36 +0100]
test_login_with_invalid_information#UsersLoginTest (1467322236.17s)
  NoMethodError:         NoMethodError: undefined method `[]' for nil:NilClass
        app/controllers/sessions_controller.rb:7:in `create'
        test/integration/users_login_test.rb:12:in `block in <class:UsersLoginTest>'
    app/controllers/sessions_controller.rb:7:in `create'
    test/integration/users_login_test.rb:12:in `block in <class:UsersLoginTest>'

  ERROR["test_login_with_valid_information", UsersLoginTest, 2016-06-30 22:30:36 +0100]
   test_login_with_valid_information#UsersLoginTest (1467322236.18s)
  NoMethodError:         NoMethodError: undefined method `[]' for nil:NilClass
        app/controllers/sessions_controller.rb:7:in `create'
        test/integration/users_login_test.rb:21:in `block in       <class:UsersLoginTest>'
    app/controllers/sessions_controller.rb:7:in `create'
    test/integration/users_login_test.rb:21:in `block in       <class:UsersLoginTest>'

The error codes point to the following controller:

  class SessionsController < ApplicationController

    def new
    end

    def create
      user = User.find_by(email: params[:session][:email].downcase)
      if user && user.authenticate(params[:session][:password])
        log_in user
        redirect_to user
      else
        flash.now[:danger] = 'Invalid email/password combination'
        render 'new'
      end
    end

    def destroy
    end
  end

I assumed at the start this was due to the sessions helper methods not being available, however the site itself runs fine in development mode and logging in is possible. Am I missing something from my sessions helper or the test file itself? My sessions helper:

module SessionsHelper

  # Logs in the given user.
  def log_in(user)
    session[:user_id] = user.id
  end

  # Returns the current logged-in user (if any).
  def current_user
    @current_user ||= User.find_by(id: session[:user_id])
  end

  # Returns true if the user is logged in, false otherwise.
  def logged_in?
    !current_user.nil?
  end
end

And finally my application controller:

class ApplicationController < ActionController::Base
  # Prevent CSRF attacks by raising an exception.
  # For APIs, you may want to use :null_session instead.
  protect_from_forgery with: :exception
  include SessionsHelper
end

Apologies for the wall of text and thanks in advance
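One thing that may be worth comparing, since the failing line is params[:session][:email]: the shape of the post call changed between Rails versions, so the hash has to match the version you are on — a sketch of both forms:

# Rails 4.x integration tests: the second argument *is* the params hash,
# so the controller sees params[:session][:email].
post login_path, session: { email: @user.email, password: 'password' }

# Rails 5+ integration tests: parameters go under the params: keyword.
post login_path, params: { session: { email: @user.email, password: 'password' } }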

How to make an annotation where I can set a different value depending on the condition?

I want to make an annotation to use in test cases. I have a method that tests methods of an object, and what I want is a "grade" annotation with a parameter called "value": when the condition holds I want the annotation to yield 10, and 0 when it doesn't.

For example: I have a method called "test" that receives an object as an argument. Knowing that this object has a method "sum", I write a condition like if(myobject.sum(2,2)==4). If this test passes, the test gets the grade 10. So what I want is an annotation to grade the tests, something like @grade(10,0): if the test passes I take 10 from the annotation, and if it doesn't I take 0.

This test case will be executed from another class, so I think I will use reflection to see whether there are annotations on the methods or attributes and then read the values, but I don't know how to implement it this way: whether I need to put the annotations on methods or on attributes, whether I have to create an attribute to hold the annotation, etc. Does anybody know how I can implement this?
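As a rough sketch of the pieces involved (the names Grade, pass and fail are only illustrative): a runtime-retained annotation on the test method, plus a reflective read of its values by whatever class runs the tests:

import java.lang.annotation.*;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Grade {
    int pass();   // grade awarded when the condition holds
    int fail();   // grade awarded otherwise
}

class Grader {
    // Reads the @Grade annotation of a test method and picks the grade
    // according to whether the test passed.
    static int gradeFor(Method testMethod, boolean passed) {
        Grade g = testMethod.getAnnotation(Grade.class);
        return passed ? g.pass() : g.fail();
    }
}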

Why doesn't my before() hook in Mocha run at all?

I have written this following code in a file named "example.js":

console.log('HI HI HI HI');

describe('hooks', function() {
  console.log('before before');
  before(function() {
    console.log('ok');
  });
  console.log('after before');  
})

Output of the code when I run "mocha example.js" is:

HI HI HI HI
before before
after before
  0 passing (1ms)

Why didn't the "ok" get printed? I thought the before() hook runs before all the code in the describe() block?

Espresso freezing on some tests randomly

I have a couple of these pretty simple tests just to try out Espresso:

@Test(timeout = 3000)
public void testSomeButton()
{
    Espresso.onView(ViewMatchers.withId(R.id.someid)).perform(ViewActions.click());
    Matcher<Intent> intentMatcher = IntentMatchers.hasComponent(SomeActivity.class.getName());
    Intents.intended(intentMatcher);
}

The problem is, every now and then, Espresso freezes on a test. The yellow spinner in Android Studio keeps spinning forever, and I can see that the screen on my Android device is just the default Android home screen, meaning that the activity has not been launched. I also have a timeout for my test, so I guess this means that the test has not been started.

If it helps:
- There are 10 tests similar to this one (for different activities) in a class.
- Sometimes it freezes after a couple of successful tests (on the 6th one, for example); sometimes it runs all of them fine.

How can I get an element randomly in a dropdownlist using Protractor?

I have this dropdownlist and I'm trying to pick a value at random and click on it. How can I do it? I can't use the class because there are other elements with the same class. I don't have a clue.

<dropdownlist _ngcontent-lnd-30="">
<select class="form-control ng-pristine ng-valid ng-touched">
        <!--template bindings={}-->
<option value="null">Selecione um tipo de norma...</option>
<option value="5980dfc1-ed08-4e5f-bdd7-144beb2fafe3">Enunciado Orientativo</option>
<option value="e721782a-11ba-4828-ac3a-934f60652760">Instrução Normativa</option>
<option value="a4469d22-1188-467d-a78a-e385a2cc8eb9">Lei</option>
<option value="9d8ea2fd-efe9-410a-8062-f5607c56332d">Portaria</option>
<option value="8407a52d-a760-48a2-b780-ab93f5904565">Provimento</option>
<option value="8b20cc7f-6be1-43a5-a0b7-ac2fe695b14c">Resolução</option>
<option value="8fe058a8-ece3-4ef5-8f74-17255a90066f">Súmula</option>
 </select></dropdownlist>
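A sketch of one way to do it (assuming the select can be reached through its dropdownlist ancestor rather than the shared class): collect the option elements, drop the placeholder, and click a random index:

var options = element.all(by.css('dropdownlist select option')).filter(function(opt) {
  // skip the "Selecione um tipo de norma..." placeholder
  return opt.getAttribute('value').then(function(v) { return v !== 'null'; });
});

options.count().then(function(count) {
  var index = Math.floor(Math.random() * count);
  options.get(index).click();
});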

Haskell - generate missing argument error message from either data type

I have the following Haskell test code with which I want to test my argument parser for a script.

error' = let mp   = runParser AllowOpts globalOptsParser ["-d", "billing"]
             opts = ParserPrefs "suffix" False False False 80
         in fst $ runP mp opts

The required arguments are,

  -d <DB name>
  --sql <SQL SELECT statement>
  --descr <Description>
  --file-path </path/to/file>

I want to test that I get the error message,

Missing: --sql <SQL SELECT statement> --descr <Description>
--file-path </path/to/file>

when I only specify "-d billing".

The above test code gives the following output if I print the result,

Left (MissingError (MultNode [MultNode [MultNode [AltNode [Leaf (Chunk {unChunk = Just --sql <SQL SELECT statement>}),MultNode []]],AltNode [Leaf (Chunk {unChunk = Just --descr <Description>}),MultNode []]],AltNode [Leaf (Chunk {unChunk = Just --file-path </path/to/file>}),MultNode []]]))

Is there a way to generate the expected error message (a String) from the above result (an Either data type)? Does the library provide an obvious function for this purpose? I cannot find anything in the documentation, and googling for examples didn't produce any answers either.
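One route I'm aware of, offered as an assumption about your setup since it bypasses runP: drive the parser through optparse-applicative's public execParserPure and render the failure, which produces the human-readable "Missing: ..." text:

-- A sketch; pinfo is assumed to be the ParserInfo built from globalOptsParser.
import Options.Applicative

missingMessage :: ParserPrefs -> ParserInfo a -> [String] -> Maybe String
missingMessage prefs pinfo args =
  case execParserPure prefs pinfo args of
    Failure failure -> Just (fst (renderFailure failure "myprog"))
    _               -> Nothing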

How to test arrow function in React ES6 class

I used arrow functions inside my React component to avoid binding the this context. For example, my component looks like this:

class Comp extends Component {
   _fn1 = () => {}
   _fn2 = () => {}
   render() {
      return (<div></div>);
   }
}

How do I test the _fn1 and _fn2 functions in my test cases? These kinds of functions are not associated with the React component's prototype, so when I do

 fnStub = sandbox.stub(Comp.prototype, "_fn1");

it is not going to work, since _fn1 is not defined on Comp.prototype. So how can I test those functions in React if I want to create functions with the arrow syntax? Thanks!
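One sketch that may help (assuming enzyme is in the picture): class property functions live on the instance rather than the prototype, so stub them on wrapper.instance() after rendering:

const wrapper = shallow(<Comp />);

// The arrow-function property exists on the instance, not on Comp.prototype.
const fnStub = sandbox.stub(wrapper.instance(), '_fn1');

wrapper.instance()._fn1();
expect(fnStub.calledOnce).to.be.true;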

Fortran runtime error: Bad value during integer read

I'm running some molecular dynamics simulations using CHARMM and I keep running into the same error

At line 631 of file /cygdrive/c/CHARMM/source/io/psfres.src (unit = 90, file ='tdskr2v5_min_CHARMM.psf')
Fortran runtime error: Bad value during integer read

Fair warning: I don't know Fortran. But I get the gist of the error; it's expecting an integer and getting something else. Line 631 is:

 #if KEY_LONEPAIR==1
    ! Read lone pair stuff
    numlp=0
    numlph=0
    read(u,fmt05,end=45) numlpx,numlphx

My problem is I can't figure out where the "Lone pair" section of my file is. So I can't pinpoint where in my input file the bad integer read is. I was curious if anyone had some suggestions for testing, etc. to try to figure out where my error is. I've tried replacing any characters with integers and that didn't fix it, so it's gotta be a spacing error, I just don't know how to figure out where the spacing error is!

Edit: I've also been tracing back where those numlpx and numlphx variables come from and that isn't helping me. Some suggestions for testing to try and find the error would be greatly appreciated!

Why isn't Go 1.6.2 searching vendor/ for packages?

The structure of the project is:

.
├── glide.yaml
├── glide.lock
├── bin
├── pkg
├── src
└── vendor

I'm using Glide for dependency management, and the GOPATH is the location of my project root (absolute path resolving to . in the above tree.)

Glide appears to install dependencies correctly, however when attempting to run tests with Go 1.6.2, I don't see it even looking in the vendor/ folder before failing:

GOPATH=/home/charney/myproject go test -i ...
src/myapp/main.go:36:2: cannot find package "http://ift.tt/1salFH3" in any of:
    /usr/local/go/src/http://ift.tt/1salFH3 (from $GOROOT)
    /home/charneymyproject/src/http://ift.tt/1salFH3 (from $GOPATH)

The package it is looking for is located at /home/charneymyproject/vendor/http://ift.tt/1salFH3
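For what it's worth, Go 1.6 only consults vendor/ directories that sit on the path from the importing package up to $GOPATH/src — a vendor/ folder that is a sibling of src/ (as in the tree above) is never searched. A sketch of a layout the toolchain would pick up, assuming the app lives under src/myapp:

.
├── glide.yaml
├── glide.lock
├── bin
├── pkg
└── src
    └── myapp
        ├── main.go
        └── vendor
            └── (vendored packages)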

How to use Appium to get the context of an already running app?

I have an android app A that invokes another android app B.

I want to write test that starts app A, click a button which opens app B.

I then want to click a button in app B. Which returns the focus to app A and sends it some data.

Is it possible to get the context of app B when it's open by app A?

Usually I open an app myself and get its context from that.

like this:

AndroidDriver AndroidDriver = new AndroidDriver( "http://localhost:53761/wd/hub" , capabilitiesObj);

How to test my Request macro from within a Laravel 5 package I'm creating?

I'm currently building a Laravel package that injects a new method in Illuminate\Http\Request. The method I'm injecting has been completed and is expected to work nicely, but I also want to test it before releasing it.

My test requires me to change the request's Content-Type header, in order for me to see whether the test is passing or not. So I have done the following to simulate the request:

use Orchestra\Testbench\TestCase as Orchestra;
use Illuminate\Http\Request;

abstract class TestCase extends Orchestra
{
    /**
     * Holds the request
     * @var Illuminate\Http\Request
     */
    protected $request;

    /**
     * Setup the test
     */
    public function setUp()
    {
        parent::setUp();

        $this->request = new Request;

        $this->request->headers->set('Content-Type', 'application/x-yaml');
    }
}

Then in my test I use the method I'm injecting into Request with $this->request->myMethod() and it's always returning false since the Content-Type header is not getting set to application/x-yaml.

/** @test */
public function it_should_do_what_i_need_it_to_do()
{
    dd($this->request->myMethod()); // Return type: boolean

    $this->assertTrue($this->request->myMethod()); // It fails!
}

How do I go on simulating the headers in a test in Laravel package development?
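For comparison, one way to build the request with the header present from the start is Symfony's Request::create, where the server array (the sixth argument) carries CONTENT_TYPE — a sketch, with an arbitrary URI:

use Illuminate\Http\Request;

$this->request = Request::create(
    '/test',                                  // URI (arbitrary for this test)
    'POST',
    [], [], [],                               // parameters, cookies, files
    ['CONTENT_TYPE' => 'application/x-yaml']  // server array -> Content-Type header
);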

store results of automatic tests and show results in a web UI

I'm looking for a piece (or a set) of software that allows to store the outcome (ok/failed) of an automatic test and additional information (the test protocol to see the exact reason for a failure and the device state at the end of a test run as a compressed archive). The results should be accessible via a web UI.

I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).

Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.

I have played around with Jenkins, which is said to be great for automatic tests, but the plugins I tried to make that work don't seem to interact well. To be specific: the test results analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.

My somewhat uneducated guess is that I'll stumble upon similar issues if I try other tools of the same class as Jenkins.

Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.

Load test is not running in Visual Studio 2012 Ultimate

I have created a load test project. When I try to run the load test, it shows the error "Method 'TestAgentDisconnected' in type 'Microsoft.VisualStudio.TestTools.LoadTest,Version=11.0.0.0,Culture=neutral,publicKeyToken=b03f5f7f11d50a3a' does not have an implementation". How do I solve this error?

Best practice of e2e testing Angular+Scala application

What is the best way to test application build with angular.js and scala (akka http) end-to-end?

Firstly I tried Protractor, since it's designed for Angular. But after a few tests I realized that sometimes I need to clear the database.

Secondly I tried to write e2e tests in Scala using Selenium, as suggested by ScalaTest (http://ift.tt/14XpMIo), and I was easily able to recreate a clean DB schema. But there is a problem: how to synchronize all the DOM read actions with Angular's processing.

Testing catch promise in Jasmine and AngularJS

I have some code like this inside my component:

p.a().then(function(x) {
  vm.x = x;
  return p.b();
}).then(function(y) {
  if (!y) {
    return $q.reject(new Error('My Error'));
  }
  vm.y = y;
  return y;
}).catch(function(error) {
  log.error(error);
});

I'm able to test the success case fine:

it('is successful', function(done) {
  spyOn(p, 'a').and.returnValue($q.resolve('x'));
  spyOn(p, 'b').and.returnValue($q.resolve('y'));

  $ctrl = $componentController('myComponent', {
    $scope: $rootScope.$new()
  });

  p.a().then(function() {
    expect($ctrl.x).toEqual('x');
    return p.b();
  }).then(function() {
    expect($ctrl.y).toEqual('y');
    done();
  });

  $timeout.flush();
});

But I am not able to test the catch and assert the error:

it('fails', function(done) {
  spyOn(p, 'a').and.returnValue($q.resolve());
  spyOn(p, 'b').and.returnValue($q.resolve());

  $ctrl = $componentController('myComponent', {
    $scope: $rootScope.$new()
  });

  p.a().then(function() {
    return p.b();
  }).catch(function(error) {
    expect(error).toEqual(new Error('My Error'));
    done();
  });

  $timeout.flush();
});

All I get when I run the tests is:

Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.

Which means the catch is not even running (I tried logging something to double check).

Any ideas what I am doing wrong?
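One alternative assertion to sketch out (assumptions: log is injectable into the test and the component kicks off the chain on construction): rather than chaining onto the spies again inside the test, spy on log.error and flush the queues, so it is the component's own catch handler being exercised:

it('logs the rejection', function() {
  spyOn(p, 'a').and.returnValue($q.resolve());
  spyOn(p, 'b').and.returnValue($q.resolve());  // resolves to undefined, so the component rejects
  spyOn(log, 'error');

  $ctrl = $componentController('myComponent', {
    $scope: $rootScope.$new()
  });

  $rootScope.$digest();   // or $timeout.flush(), whichever your setup already relies on

  expect(log.error).toHaveBeenCalled();
});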

how do I test a component

I need to test a component which basically returns a byte array after hitting a URL. The method looks like this:

@RequestMapping(value = "/{schemaType}/{schemaVersion}/{xsdFile:.+}", method = RequestMethod.GET)
@ResponseBody
public byte[] getSchema(@PathVariable("schemaType") String schemaType,
                @PathVariable("schemaVersion") String schemaVersion, @PathVariable("xsdFile") String xsdFile)
        throws Exception {
    // Business logic
}
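A sketch of how such a handler is often exercised with Spring's MockMvc in standalone mode (the class name SchemaController and the URL are assumptions about your code):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

public class SchemaControllerTest {

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Standalone setup: no Spring context required, just the controller under test.
        mockMvc = MockMvcBuilders.standaloneSetup(new SchemaController()).build();
    }

    @Test
    public void returnsTheRequestedXsd() throws Exception {
        mockMvc.perform(get("/mySchema/1.0/example.xsd"))
               .andExpect(status().isOk());
    }
}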

Setting up Page objects in Protractor

I am new to Protractor and getting my hands dirty with all the various tips and tricks to make my code more modular and efficient. I created a page object for my specification file. Page object:

var mapFeedBackpage=function(){

    REPORT_ROAD=element.all(by.css("div[ng-click=\"setLocation('report_road')\"]"));
    ROAD_NEW=element.all(by.css("div[ ng-click=\"mapFeedBack.editObject= mapFeedBack.createMapObjectModel();setLocation(mapFeedBack.noMap?'road_new':'choose_location_road_new/road_new')\"]"));
    ZOOM_IN=element(by.css('div[ng-click="zoomIn()"]'));
    ROAD_NAME=element(by.model("mapFeedBack.editObject.roadName"));
    SUBMIT_ROAD=element(by.css('button[ng-click="onSubmit({reportType: reportType})"]'));
    HIGHWAY_OPTION=element(by.model("mapFeedBack.editObject[attrs.select].selected")).$("[value='string:app.road.roadType.highway']");


    };

    module.exports=mapFeedBackpage;

Now the problem is that when I write this.REPORT_ROAD in the page object file, my test fails saying it can't find the REPORT_ROAD variable, but when I remove this. from the variable it works. I am wondering why that is. Can anyone please explain this to me? I used this page object guide: http://ift.tt/29f0hNU

My Spec file code is as follows :

var mapFeedBackpage=require('./mapFeedBack-page.js')
describe("Map feedback Automation",function()
{

var mapFeedBack= new mapFeedBackpage();

    it("Check if the Url works ",function() //spec1
    {
        browser.get(browser.params.url);
        expect(browser.getCurrentUrl()).toContain("report");
        browser.sleep(browser.params.sleeptime);
    }); 


    it("test browser should reach report road option",function() //spec2s
    {
        REPORT_ROAD.click();
        expect(browser.getCurrentUrl()).toContain("report_road");
        browser.sleep(browser.params.sleeptime);
        browser.sleep(browser.params.sleeptime);
    });


    it("test browser should reach report road missing",function() //spec3
    {
        ROAD_NEW.click();
        expect(browser.getCurrentUrl()).toContain("choose_location_road_new/road_new");
        browser.sleep(browser.params.sleeptime);
        browser.sleep(browser.params.sleeptime);
    });


    it("test browser should zoom on map ",function() //spec4
    {


    var EC = protractor.ExpectedConditions;

    for(var i=0;i<3;i++)
    {
        var elm = ZOOM_IN;
        browser.wait(EC.elementToBeClickable(elm), 10000);
        elm.click();
        browser.sleep(browser.params.sleeptime);
    }



    }); 

    it("Should click on ok option",function() //spec5
    {
            var EC = protractor.ExpectedConditions;
        var elm = element(by.buttonText('OK'));
        browser.wait(EC.elementToBeClickable(elm), 10000);
        elm.click();

        expect(browser.getCurrentUrl()).toContain("road_new");

    }); 



it("test browser should reach report road option",function() //spec6
    {

        browser.sleep(browser.params.sleeptime);
        expect(browser.getCurrentUrl()).toContain("road_new");

    }); 



    it("should  enter a road name",function()   //spec8
    {       

     browser.sleep(browser.params.sleeptime);

     var testroadname = browser.params.testroadname;


     ROAD_NAME.sendKeys(testroadname);
    browser.sleep(browser.params.sleeptime);



    });


        it("should check the type of road is highway",function()  //spec9
    {

    HIGHWAY_OPTION.click();
});


        it("should  submmit the map feedback",function()  //spec10
    {       

    SUBMIT_ROAD.click();
    browser.sleep(browser.params.sleeptime);
    });





});
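For reference, the form that makes this.REPORT_ROAD resolvable is to attach each locator to this inside the page object constructor and then go through the page object instance in the spec — a sketch using two of the locators above:

// mapFeedBack-page.js
var mapFeedBackpage = function() {
    this.REPORT_ROAD = element.all(by.css("div[ng-click=\"setLocation('report_road')\"]"));
    this.ZOOM_IN = element(by.css('div[ng-click="zoomIn()"]'));
    // ...the remaining locators attached the same way
};
module.exports = mapFeedBackpage;

// in the spec
var mapFeedBack = new mapFeedBackpage();
mapFeedBack.REPORT_ROAD.click();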

Jenkins test result parsing

I've got a Jenkins with a lot of jobs. These jobs do tests and produce test outputs in xml. Those xml test results look - pretty standard - like this:

<testsuites name="testsuitesname">
  <testsuite name="testsuitename">
    <testcase classname="classname" name="testcasename">
      blabla
    </testcase>
  </testsuite>
</testsuites>

When you use the option "Publish JUnit testwhatever" the structure will be like this:

(root) > classname > testcasename

There is no "testsuitesname" and no "testsuitename" in this hierarchy.

Is there an option to either

a) add the testsuitesname and testsuitename to the hierarchy? ((root) > testsuitesname > testsuitename > classname > testclassname)

or b) somehow add more hierarchy to the "testcase" tag?

Because my test results are quite big, I want to add more hierarchy/structure than just the two levels there are now, to get a better overview of all of those results.

Can anybody help me, or has anybody had a similar problem?

Best

Andy

How to get Facebook Android App package and Activity name?

I want to automate the Facebook Android app but I am not able to find the Facebook app's activity name. Let me know how to get the activity name of the Facebook application.
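One common way to find this (not specific to Facebook; it works for whatever app is in the foreground): open the app on a connected device and dump the focused window with adb, which prints the package and activity:

adb shell dumpsys window windows | grep -i mCurrentFocus
# prints something of the shape: mCurrentFocus=Window{... u0 <package>/<activity>}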

Run TestNg tests in Jetbrains Idea

I use IntelliJ IDEA Community (2016.1.3) and I'm trying to run TestNG tests. As a result I get this kind of exception: Exception in thread "main" java.lang.NoSuchMethodError: org.testng.IDEARemoteTestNG.configure(Lorg/testng/CommandLineArgs;)V

here is a screenshot:

(screenshot not reproduced here)

But when I run the tests via the console, everything is fine. I use Maven as the build tool.

In some articles people suggest adding the Maven Surefire Plugin. This plugin has already been added to the pom file.

I suppose that maybe I need to add extra values to the run configuration for this test. Right now it looks like this:

(screenshot not reproduced here)

Thanks in advance.

Wednesday 29 June 2016

What are the frequently encountered problems while automating a Salesforce application using Selenium WebDriver?

What are the frequently encountered problems while automating a Salesforce application using Selenium WebDriver?

How can I test that my XPath has identified the right element in an Android/iOS native application?

Android User Name field details:
  content-desc:
  type: android.widget.EditText
  text: Username
  index: 0
  enabled: true
  location: {30, 293}
  size: {480, 60}
  checkable: false
  checked: false
  focusable: true
  clickable: true
  long-clickable: true
  package: com.senrysa.parkingplace
  password: false
  resource-id: com.senrysa.parkingplace:id/LoginUserName
  scrollable: false
  selected: false
  xpath: //android.widget.LinearLayout[1]/android.widget.FrameLayout[1]/android.widget.RelativeLayout[1]/android.widget.ScrollView[1]/android.widget.RelativeLayout[1]/android.widget.LinearLayout[2]/android.widget.EditText[1]

I have created a partial XPath using the text attribute, i.e. xpath = "//android.widget.EditText[@text='Username']")

but it has not worked.

How can I test that my XPath has identified the right element in an Android/iOS native application?
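A quick sketch of how this is often checked from the test itself (driver being your AndroidDriver/IOSDriver): if the XPath is right, findElements returns exactly one element whose text matches:

List<WebElement> matches = driver.findElements(
        By.xpath("//android.widget.EditText[@text='Username']"));

// With a correct locator there should be exactly one match with the expected text.
System.out.println("matches: " + matches.size());
if (!matches.isEmpty()) {
    System.out.println("text: " + matches.get(0).getText());
}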

In Sinatra, how to test a route that can return both html and json?

I am still new to Sinatra and I have built my app all based on JSON, with no views. Now I would like to have the same behaviour but render the results in a view. When I was just returning JSON, my tests all worked. Now that I am trying to introduce erb templates in the routes, my tests crash, and the variables in the route method don't get passed to the view either.

What am I doing wrong?

Here is the code of the tests:

main_spec.rb:

describe "when a player joins the game" do

  it "welcomes the player" do
    post "/join", "data" => '{"name": "Jon"}'
    response = {status: Messages::JOIN_SUCCESS}
    expect_response_to_eq(response)
  end

  it "sends an error message if no name is sent" do
    post "/join", "data" => '{}'
    response = {status: Messages::JOIN_FAILURE}
    expect_response_to_eq(response)
  end

  it "sends an error message if player could not join the game" do
    fill_the_game
    post "/join", "data" => '{"name": "Jon"}'
    response = {status: Messages::JOIN_FAILURE}
    expect_response_to_eq(response)
  end

  it "returns an empty response if no data is sent" do
    post "/join"
    expect_response_to_eq({})
  end

  def expect_response_to_eq(response)
    expect(last_response).to be_ok
    expect(JSON.parse(last_response.body, symbolize_names: true)).to eq(response)
  end

  def fill_the_game
    server.join_game("Jane")
    server.join_game("Joe")
    server.join_game("Moe")
    server.join_game("May")
  end

end

where Messages is a module that contains string messages for the game.

My controller initially looked like this, it just returned the response in json format:

main.rb

post "/join" do
  response = helper.join_response(params)
  @title   = Messages::JOIN_TITLE
  response.to_json
end

The helper is a class where I extracted all the business logic so that the controller only has to deal with HTTP requests. I use dependency injection to pass the helper to the main controller, so that it is easier to test.

So up to here, if I run the tests, they are green. But now I want to render the results of the response in the views through erb, while still returning the json. So I added a test like this:

main_spec.rb:

  it "renders the join page" do
    h = {'Content-Type' => 'text/html'}
    post "/join", "data" => '{"name": "Jon"}', "headers" => h
    expect(last_response).to be_ok
    expect(last_response.body).to include(Messages::JOIN_TITLE)
  end

And then modified the join router to make the test pass:

main.rb:

post "/join", :provides => ['html', 'json'] do
  response  = helper.join_response(params)
  @title    = Messages::JOIN_TITLE
  @r_status = response[:status]

  respond_to do |format|
    format.html { erb :join }
    format.json { response.to_json }
  end

  response.to_json
end

This broke all my tests. So I tried something else:

main.rb:

post "/join", :provides => ['html', 'json'] do
  response  = helper.join_response(params)
  @title    = Messages::JOIN_TITLE
  @r_status = response[:status]

  request.accept.each do |type|
    case type.to_s
    when 'text/html'
      halt erb :join
    when 'text/json'
      halt response.to_json
    end
  end

end

Breaks everything as well.

If I add a line at the end, response.to_json, just before closing the method, my tests pass except for the last line expect(last_response.body).to include(Messages::JOIN_TITLE). Indeed when I load the page in a browser, the @title seems to be sent to the page but not the @r_status for some reason. In the erb view, I have <p><%= @r_status %></p>, so it should show up. The title is rendered in the layout erb, as <h1><%= @title %></h1>.

I have printed the value of @r_status and it is correct, but if I print stuff from inside the when blocks, it's like it never hits those.

What is it that I am doing wrong? Why is the @r_status not rendered in the view and why aren't the when blocks hit?
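One detail that may matter for the new test: with Rack::Test the request headers go in the env hash (third argument, with an HTTP_ prefix), not in the params hash, so the Accept header reaches Sinatra like this — a sketch:

it "renders the join page" do
  post "/join", { "data" => '{"name": "Jon"}' }, { "HTTP_ACCEPT" => "text/html" }
  expect(last_response).to be_ok
  expect(last_response.body).to include(Messages::JOIN_TITLE)
end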

GREYAction not working when a modal dialog shows up

I use some Maps API in my app. However, it sometimes prompts with a modal dialog asking for permission, which in turn breaks my automated testing with EarlGrey.

How can I deal with the system modal dialog case?

Thanks.

Catch test framework issue: cannot use Catch::Session()

I get this error in a C++ file where I am writing some tests:

error: no member named 'Session' in namespace 'Catch'
        testResult = Catch::Session().run(test_argc, test_argv);
                     ~~~~~~~^

Looking at the catch.hpp single header file, I noticed that the code that should implement the Session() member function is greyed out, probably because of an #ifdef somewhere, which I cannot find.

Is there any macro to set to use the Session class?

Catch versions: 1.5.3 and 1.5.6.

Reference: http://ift.tt/1puB98D
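For reference, Catch only compiles Catch::Session into the translation unit that defines CATCH_CONFIG_RUNNER (or CATCH_CONFIG_MAIN) before including catch.hpp; with neither defined, that part of the header is exactly the greyed-out region. A minimal runner sketch:

// main_test.cpp -- the single translation unit that owns Catch's implementation.
#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

int main(int argc, char* argv[]) {
    // Session is available here because the implementation was compiled in above.
    return Catch::Session().run(argc, argv);
}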

Rails tests, fixture & class loading order

I have an ActiveRecord class which has scopes dynamically added based on content of another table.

(simplified code example)

class Thing < ActiveRecord::Base

  Feature.all.each do |feature|
    scope feature.name, ->{ joins(:feature).where("feature.name = #{feature.name}") }
  end
end

Another part of the UI of the app is a "Thing display" that lets you search and apply these feature scopes as part of the search.

This all works fine in the running app.

In tests, however, it seems success depends on what order things are loaded in. Sometimes Thing has the expected scopes, other times not. I thought I could force the Feature class and fixtures to load first by calling Feature.count before any tests run, but this doesn't work.

Is there some way to force the loading of class/fixtures to resolve this?

Different resource files for integration testing an asp.net application

My asp.net application uses a resource file to point to some REST API endpoints. The app's behavior changes depending on the amount of data it gets back from those services.

I'd like to perform integration testing on my app but I'd like to use different resource files that have custom api endpoints depending on the scenario I'd like to check against. For instance, I'd like to be able to test the integration of my app if the end points return nothing, one item, or many items.

In my ninject bindings I have

var appSettings = StreamDeserializer.DeserializeFileFromResource<AppStartSettings>(Resources.appsettings);

Is there a way I can configure specflow to rebuild my application with a different resource file depending on the integration test scenario?

Expect deep property to have any of multiple values

In Chai assertion library, we can assert a deep property to exist and have a value:

expect(obj).to.have.deep.property("field1.field2", 1);

But what if we need to assert that this property has one of multiple values? In this case, the test should pass if obj has a field1.field2 property whose value is 0, 1 or 2.


FYI, I need this to check that an ESLint plugin ships with a recommended rules configuration that has a "warning level" configured for every rule. The warning level can be 0, 1 or 2.
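For what it's worth, recent chai versions ship a oneOf assertion, so this can be expressed as a deep-property existence check plus a plain member access — a sketch:

// Passes when field1.field2 exists and is exactly 0, 1 or 2.
expect(obj).to.have.deep.property("field1.field2");
expect(obj.field1.field2).to.be.oneOf([0, 1, 2]);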

Using Mocha to test higher order components in React

I am using a HOC in a React component that looks like:

import React from 'react';
import Wrapper from 'wrapper';
class Component extends React.Component {
  render () {
    return (
      <div className='component' />
    )
  };
}
export default Wrapper(Component)

When testing Component using Mocha I am attempting to look for a class name that should be contained within Component. Like so:

describe('Component', function () {
  it('can be mounted with the required class', function () {
    const component = shallow(
      <Component />
    );
    expect(component).to.have.className('component');
  });
});

The problem is that Mocha doesn't know to look within the wrapper for the Component and attempts to find it in the HOC, which of course it will not.

The error I am receiving is:

AssertionError: expected <Wrapper(Component) /> to have a 'component' class, but it has undefined
     HTML:

     <div class="component">
     </div>

How do I tell Mocha to look within the HOC for the correct location of the class name instead of the HOC itself?
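One common sketch: keep the wrapped default export for application code, add a named export of the undecorated class, and shallow-render that in the test:

// component.js -- wrapped default export stays as-is; named export added for tests.
export class Component extends React.Component {
  render () {
    return (
      <div className='component' />
    );
  }
}
export default Wrapper(Component);

// component.test.js -- test the unwrapped class directly.
import { Component } from 'component';

const component = shallow(<Component />);
expect(component).to.have.className('component');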

Stuck at an infinite loop unless printing result

I am trying to test a use case where I need to launch two threads, but the second one needs to wait for a particular state to happen.

The first thread launches the resolution process of a tournament in order to calculate its schedules. The second thread stops the resolution process.

@Test
public void stopResolutionProcessTest() throws InterruptedException {
    TournamentSolver solver = tournament.getSolver();

    Thread solveThread = new Thread(tournament::solve);
    solveThread.start();

    while (solver.getResolutionState() != TournamentSolver.ResolutionState.COMPUTING);

    Thread stopThread = new Thread(solver::stopResolutionProcess);
    stopThread.start();

    solveThread.join();
    stopThread.join();

    assertEquals(TournamentSolver.ResolutionState.INCOMPLETE, solver.getResolutionState());
}

The main thread gets stuck in the while loop, as if it were infinite.

However, if I just print the resolution state inside the loop, the test runs as expected:

while (solver.getResolutionState() != TournamentSolver.ResolutionState.COMPUTING)
    System.out.println(solver.getResolutionState());

I have no explanation for this; I feel like I am back several years in the past when I was studying concurrency and unexpected things I couldn't explain happened.

Could anybody shed some light on what's going on?

Edit: alright, I declared getResolutionState() as synchronized, but I still don't understand why this helps or why printing the state would produce the described results.
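For context, a sketch of the field-level alternative to synchronizing the getter. Without any synchronization point, the busy-wait loop has no guarantee of ever seeing the other thread's write (the JIT may hoist the read out of the loop); the println call synchronizes internally, which in practice defeats that optimization, though it is not a real guarantee:

// In TournamentSolver: make the state visible across threads.
private volatile ResolutionState resolutionState;

public ResolutionState getResolutionState() {
    return resolutionState;   // a volatile read always observes the latest write
}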

Testing using Java in Eclipse

I want to verify whether the search functionality is working properly on a website, using Java. So I want to type "something" in the search box and press the Enter key. How do I write test automation for that part? :(
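A sketch of the usual Selenium WebDriver approach (the URL and the locator name "q" are placeholders for your site's page and search box):

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("https://example.com");

        // Type into the search box and press Enter.
        driver.findElement(By.name("q")).sendKeys("something", Keys.ENTER);

        driver.quit();
    }
}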

How do I get the URI of a JavaFileObject in annotation testing?

I'm using XSLT to generate Java Code with the javax.annotation.processing API. It works well as long as I don't want to test it, which I of course do :) While running the processor, the JavaFileObject.toUri() gives me a "file:/"-URI, which I can use for creating a StreamResult for the Transformer.

This is not the case when testing; there I get, for example, /CLASS_OUTPUT/com/example.MyClass.class

JavaFileObject jfo = filer.createSourceFile(className);
String uri = jfo.toUri().toASCIIString();
if (!uri.startsWith("file:/")) { // is test!
     messager.printMessage(Diagnostic.Kind.NOTE, "Not a file URI, we're testing!");
    return;
}
StreamResult result = new StreamResult(uri);
DOMSource src = new DOMSource(modelDoc);
DOMSource xslSource = new DOMSource(loadDocument(xslUri));
TransformerFactory tf = new net.sf.saxon.TransformerFactoryImpl();
Transformer transformer = tf.newTransformer(xslSource);
transformer.transform(src, result);

I have the following questions:

  1. Does the test create a real source file?
  2. If yes, what is its actual location?

Thanks!

When running Nightwatch.js test how can I get the name of browser currently running the tests?

Situation: We are running tests in several browsers using Nightwatch
(via Saucelabs; everything runs fine on Saucelabs).

Desired: we want to know which browser the test is currently running in so we can save screenshots including the browser name.

Is it possible to determine which browser is running the tests?

Powershell Script running appropriately on one machine but not on the rest

I am currently working with the script below and trying to get it to run on several machines.

I have run it perfectly fine on 2 machines; however, when running it on another 3, I am getting output paths for almost everything, even files not being asked for by -Include.

Stumped, since it works fine for 2 machines but not for the rest.

Get-ChildItem B:\, C:\, E:\, F:\, G:\, H:\, I:\, J:\, K:\, L:\, M:\, Q:\ -Force `
    -Include "*.pdf", "*.xls",".txt", ".html", "*.txt" -Recurse -EA Silentlycontinue |
    Foreach-Object { $_.Fullname } |
    Out-File "C:\LOGS\PATHFINDER$(Get-Date -f yyyy-MM-dd-HHmmss).txt" -Width 1024

How to get a debug view in eclipse?

If I want to get a view by its ID using this code, where do I get MyView.ID from?

IViewPart part = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage()
        .findView(MyView.ID);
    if (part instanceof MyView) {
        MyView view = (MyView) part;
    }
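For context, MyView.ID is only a convention: the view class exposes the same id string that its <view> element declares in plugin.xml — a sketch with an assumed id value:

public class MyView extends ViewPart {
    // Must match the id attribute of the corresponding <view> element in plugin.xml.
    public static final String ID = "com.example.myplugin.views.MyView";
    // ...
}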

Protractor.js test case for Horizontal Scrolling and vertical Scrolling in angular2

How do I write a test case to identify whether a view has horizontal or vertical scrolling, using Protractor for Angular 2?

Protractor - How to store the value of browser.executeScript in a variable?

I am trying to store the value of browser.executeScript in a local variable in my it block, but I am not able to do so; in all cases it displays null.

I have tried many ways so far

     browser.executeScript('$("#txtName").css("border-left-color");').then(function (color) {
        console.log("This is color" + color);
    });

Also this

function returnColor()
{
     var  a = browser.executeScript('$("#txtName").css("border-left-color");');
     return a;
}

function getColorCode()
{
       var a = returnColor().then(function(list){
           console.log("Output is ***************" + list);
             return list;
      });

        return a;
}

I am using this inside my spec as

   iit('', function() {        

             browser.executeScript('$("#txtName").css("border-left-color");').then(function (color) {
                console.log("This is color" + color);
            });

            returnColor();


        });

I would really appreciate it if someone could tell me how to do this properly.
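One detail worth noting, grounded in how WebDriver's executeScript works rather than anything Protractor-specific: only what the injected script returns comes back to the test, so the snippet needs an explicit return — a sketch:

browser.executeScript('return $("#txtName").css("border-left-color");').then(function (color) {
    console.log("This is color " + color);
});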

How can you unit test Leaflet JS maps?

How can you unit test Leaflet JS maps?

How do I test custom token authentication in Symfony2

We have a JSON API (not exactly RESTful) where we authenticate our users by two request parameters - login and password. For unrelated reasons we need no sessions nor API tokens, just login with each API call.

Already implemented Firewall listener and Authentication provider to authenticate the right way and now I want to test it to assure it.

I got stuck writing my functional test for my authentication. To put it simply, I want to make a request like this

$client->request('POST', self::TEST_METHOD, array(
    'login' => 'test_user',
    'password' => 'aaa123'
));

And then test my response like this

self::assertNotEquals(
    Response::HTTP_FORBIDDEN,
    $response->getStatusCode(),
    'Returns http code Forbidden (403) on successfull login'
);

But for some reason I always get 403 Forbidden. After some trying, it seems like my own custom firewall listener is not registered.

I register my Firewall listener by Firewall Listener Factory in my AppBundle::build() method

public function build(ContainerBuilder $container)
{
    parent::build($container);

    $extension = $container->getExtension('security');
    $extension->addSecurityListenerFactory(new ApiOperatorFactory());
}

I am nearly sure I am misunderstanding something. I tried to search for solutions all over the Internet but found nothing, as if I were asking the wrong question. Can somebody help me please? I can provide any other source code, but I think the problem might be elsewhere.

My authentication works perfectly on its own, so there should not be a problem in the authentication itself. I just can't test it...

Running Android junit test cases for Fragments using Robotium

I've checked many sources but couldn't find the right solution. I am working on automated testing for one of my applications and am able to write the test cases for activities. But one of the activities has many fragments. I want to run automated tests for these fragments. It seems like we cannot do it the same way as we do for an activity. So can anyone help me or provide me with a sample of how to do it? Any help is appreciated.

Sinon stub error with localstorage in karma tests (es6 + jspm)

I am trying to stub the setItem and getItem methods of window.localStorage and I am running into the issues that can be seen in the screenshot:

(screenshot not reproduced here)

The point is I don't know what happens to the window.localStorage object, which seems to behave differently depending on the property. When the time comes to stub the setItem method, I get:

Testing storage services LocalStorage "before each" hook for "should return current name":
     TypeError: Attempted to wrap string property setItem as function
      at checkWrappedMethod (base/node_modules/sinon/pkg/sinon.js:1355:29)
      at Object.wrapMethod (base/node_modules/sinon/pkg/sinon.js:1398:21)
      at Object.stub (base/node_modules/sinon/pkg/sinon.js:3465:26)
      at Context.eval (client/js/common/services/storage/localstorage.spec.js!transpiled:30:14)
      at Object.invoke (base/client/jspm_packages/github/angular/bower-angular@1.5.1/angular.js:4628:19)
      at Context.workFn (base/client/jspm_packages/npm/angular-mocks@1.4.8/angular-mocks.js:2441:20)
      at window.inject.angular.mock.inject (base/client/jspm_packages/npm/angular-mocks@1.4.8/angular-mocks.js:2413:37)
      at Context.eval (client/js/common/services/storage/localstorage.spec.js!transpiled:20:7) Error: Declaration Location
      at window.inject.angular.mock.inject (base/client/jspm_packages/npm/angular-mocks@1.4.8/angular-mocks.js:2412:25)
      at Context.eval (client/js/common/services/storage/localstorage.spec.js!transpiled:20:7)

I am using jspm as a client package/module manager. So, the karma config file is:

basePath: './',

    // frameworks to use
    // available frameworks: http://ift.tt/1ft83uu
    frameworks: ['jspm', 'mocha', 'chai-as-promised', 'chai', 'sinon'],

    // start these browsers
    // available browser launchers: http://ift.tt/1ft83KU
    browsers: ['Chrome'],

    // test results reporter to use
    // possible values: 'dots', 'progress'
    // available reporters: http://ift.tt/1ft83KQ
    reporters: ['mocha'],

    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: true,

    // enable / disable colors in the output (reporters and logs)
    colors: true,

    // list of files / patterns to load in the browser
    files: [],

    jspm: {
      // Edit this to your needs
      config: 'jspm.config.js',
      packages: 'client/jspm_packages',
      loadFiles: [
        'client/js/common/services/**/*.spec.js'
      ], 
      serveFiles: [
        'client/js/**/*.js',
        'client/js/**/*.html',
        'client/js/**/*.css'
      ],
      paths: {
        'github:*': 'base/client/jspm_packages/github/*',
        'npm:*': 'base/client/jspm_packages/npm/*',
        'js/*': 'base/client/js/*'
      },
      urlRoot: './'
    },

    proxies: {
      '/client': '/base/client'
    },

    // list of files to exclude
    exclude: [],

    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,
    client: {
      captureConsole: true,
      mocha: {
        bail: false,
        // require: 'should'
        reporter: 'spec',
        ui: 'bdd'
      }
    }

The only thing I can ask is whether someone has had the same issue and/or can provide some hint towards a solution :-/

Thanks!!!
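One sketch that sidesteps the problem entirely (an assumption about what you are free to change): make the service depend on a thin wrapper object instead of window.localStorage directly, and stub the wrapper in the specs, so nothing browser-owned ever needs to be wrapped by sinon:

// storageWrapper.js -- the only place that touches window.localStorage.
export default {
  getItem: (key) => window.localStorage.getItem(key),
  setItem: (key, value) => window.localStorage.setItem(key, value)
};

// in the spec
import storage from './storageWrapper';
const setItemStub = sinon.stub(storage, 'setItem');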

Appium TestNG Running Isolated Test vs. Fixture

I'm not sure how I should organize my test classes in TestNG for Appium. I currently have a @Factory that instantiates an instance for each locally attached device (with parallel="instances").

Normally, when I run a test case, I want that test case to be self-contained. In the context of Appium, this doesn't make sense to me. I figured that I could essentially use a class to create a common state, and then use dependsOnMethods and dependsOnGroups to control execution of all methods that follow setup.

I know that I can use listeners to retry a test case and repeat the setup in case that a test fails for something that isn't an Assert. This approach makes sense to me in that it would save a lot of time (restarting the app is quite a bit of overhead), while also allowing you to isolate tests in the case that one of the dependencies messes up.

I figured that this approach would make creating test cases more difficult, but that could probably be solved by making this approach optional (vs. the default of running isolated tests).

Are there other downsides to taking this approach? Also, if having dependent test methods is discouraged, what is TestNG's dependsOnMethods normally used for?

HTTPS test server that checks client certificates

I have written a web service client that uses SSL client certificates to authenticate to the remote server. But since the actual web service is not yet available to me, I'm looking for a public test server that accepts a client certificate for authentication, so that I can test the SSL part of my client for correct implementation and configuration.

I have tried https://requestb.in but it replies with HTTP status 403 (Forbidden) when I use a client certificate. And https://httpbin.org/ accepts my request but doesn't give any indication if the certificate was usable.

Is there a similar service that checks the SSL client certificate?

How to get header values in laravel testing

This is how I get the response in a test case:

$response = $this->call('POST','/api/auth/login',['username'=>'xx','password'=>'xxx'], [/* cookies */], [/* files */], ['HTTP_ClientSecret' => 'xxxx']);

Then we can get the response content like this:

$response->getContents()

I want to know how to get the response header data.
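For reference, the object returned by $this->call() is an Illuminate/Symfony response whose headers live in a header bag — a sketch:

// A single header from the test response.
$contentType = $response->headers->get('Content-Type');

// Or all of them at once.
$allHeaders = $response->headers->all();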

Unit testing and global testing for custom CMS

At the company, we are writing a micro-CMS from scratch in PHP + MySQL. It allows creating a multidomain + multilanguage + e-commerce platform, with an administration part. As we are not using any framework, I wonder how we could implement some unit tests and modular tests to catch any bugs that may slip through the code. The code is completely procedural. It has grown from a few files to a huge amount of code and tables, and we anticipate that it may become a nightmare to maintain.

How many Jenkins Executors can you have?

I'm running parallel tests with Jenkins.

The way I have it set up is I have a build flow job that executes three other jobs, in parallel. The three other jobs are connected to separate Test XML files.

When I initially started this I had a problem that only two jobs would execute and the third job would only execute after one of the others had finished.

I found this to be due to my Jenkins having the number of executors set to " 2 ", which is now set to " 5 ".

However, as a matter of interest and for future planning: does Jenkins have a cap on the number of executors you can have? Or is there a recommended number that you shouldn't exceed? Or is it solely down to the environment you are running it on?

If there is a cap/recommended number not to exceed, I presume the best way to deal with this would be to use a master/slave scenario and spread the workload across multiple VMs?

For example, if I had it set to 6 executors, would this mean I would have 6 executors on each VM, or 6 executors shared out between the VMs?

Thank you.

Mock Session in Spring Boot and RestAssured

I have a web application running with Spring Boot. Now I have to write tests with REST Assured. However, to run some of them I have to be authenticated on the server. The server uses Google OAuth authentication. Is there any way to mock the session with REST Assured? The documentation doesn't say a lot about this, and the approaches covered there don't help.

when()
      .sessionId("id here")

On the server side I'm using HttpSession with userId parameter inside.

Error when calling qExec "no known conversion for argument 1 to QObject"

I'm trying to create tests for a C++ application with QtTest. The three relevant files that I have are: GuiTests.cpp, which contains my main function; testsuite1.cpp, which contains my tests; and testsuite1.h, which contains the declarations of my tests. I created these files with help from different guides, for example this one: http://ift.tt/294dWDx.

When I try to build I get this error:

no matching function for call to 'qExec(TestSuite1 (*)(), int&, char**&)'

no known conversion for argument 1 from 'TestSuite1 (*)()' to 'QObject*'

I don't understand why; as you can see in testsuite1.h below, TestSuite1 is a QObject. The funny thing is that this exact code (I am pretty sure) worked before, but then I fiddled around with passing argc and argv to guiTest() for a while, and after I removed argc and argv and went back to what I had before (what I currently have, please see the files below), I got this error.

I've been trying to solve this problem for a long time and I can't find any answers online, so please help me, any help is appreciated. Thanks!

GuiTests.cpp

#include "testsuite1.h"
#include <QtTest>
#include <QCoreApplication>

int main(int argc, char** argv) {
    TestSuite1 testSuite1();
    return QTest::qExec(&testSuite1, argc, argv);
}

testsuite1.h

#ifndef TESTSUIT1_H
#define TESTSUIT1_H

#include <QMainWindow>
#include <QObject>
#include <QWidget>
#include <QtTest>

class TestSuite1 : public QObject {
Q_OBJECT
public:
    TestSuite1();
    ~TestSuite1();

private slots:
    // functions executed by QtTest before and after test suite
    void initTestCase();
    void cleanupTestCase();

    // functions executed by QtTest before and after each test
    //void init();
    //void cleanup();

    // test functions
    void testSomething();
    void guiTest();
};

#endif // TESTSUIT1_H

testsuite1.cpp

#include "testsuite1.h"
#include <QtWidgets>
#include <QtCore>
#include <QtTest>

TestSuite1::TestSuite1()
{

}

TestSuite1::~TestSuite1()
{

}

void TestSuite1::initTestCase()
{

}

void TestSuite1::cleanupTestCase()
{

}

void TestSuite1::guiTest()
{
    QVERIFY(1+1 == 2);
}

void TestSuite1::testSomething()
{
    QLineEdit lineEdit;

    QTest::keyClicks(&lineEdit, "hello world");

    QCOMPARE(lineEdit.text(), QString("hello world"));

    //QVERIFY(1+1 == 2);
}

//QTEST_MAIN(TestSuite1)
//#include "TestSuite1.moc"
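One thing worth checking against the error text: the compiler reports the argument as TestSuite1 (*)(), i.e. a function type, which is exactly what TestSuite1 testSuite1(); declares (C++'s most vexing parse). A sketch of main with a plain variable definition instead:

int main(int argc, char** argv) {
    TestSuite1 testSuite1;   // no parentheses: an object, not a function declaration
    return QTest::qExec(&testSuite1, argc, argv);
}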

Tuesday 28 June 2016

hadoop - Validate json data loaded into hive warehouse

I have JSON files with a total volume of approximately 500 TB. I have loaded the complete set into a Hive data warehouse.

How would I validate or test the data that was loaded into the Hive warehouse? What should my testing strategy be?

Please help.

minitest-reporters does not change minitest output style

I have been trying to use the minitest-reporters gem to alter the output style of Ruby's builtin minitest testing library. However, it does not actually change the output.

It should be noted that I am not using Rails or Rake, but I didn't think that would make a difference. I am simply trying to test a Ruby command-line program that I have written.

Here's a dumb little test case (let's call it dumbtest.rb) that I was trying out:

require 'minitest/autorun'
require 'minitest/reporters'

Minitest::Reporters.use! [Minitest::Reporters::DefaultReporter.new(:color => true), Minitest::Reporters::SpecReporter.new]

describe "MiniTest demo" do
  describe "when asked about the number 2" do
    it "should be equal to the number 2" do  
      2.must_equal 2
    end 
  end 
end

When I run the test, it just produces the default minitest output (i.e. colorless, no descriptions of passing tests, etc.):

$ ruby -Ilib:test dumbtest.rb 
Run options: --seed 48983

# Running:

.

Finished in 0.001356s, 737.6595 runs/s, 737.6595 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips

With minitest-reporters enabled, I expect the output to look something more like this (i.e. list both passing and failing tests as opposed to just failing, the word PASS is colored green, the final summary is color-coded, etc.):

(screenshot of the colored, spec-style output not reproduced here)

There are no runtime errors. It's just not working for me. Any idea why?

runtime error in ideone [duplicate]


import java.util.*;
import java.lang.*;
import java.io.*;

class Ideone
{
    public static void main (String[] args) throws java.lang.Exception
    {
        Scanner sc = new Scanner(System.in);
        int n =sc.nextInt();
        int arr[] = new int[n];
        for(int i = 0 ; i < n ; i++){
        arr[i] = sc.nextInt();
     }

        for(int i = 0 ; i < n ; i++){
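        // Note: when i == n - 1, arr[i + 1] reads one past the end of the array
        // (ArrayIndexOutOfBoundsException), which is a likely source of the runtime error here.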
        arr[i] = Math.abs(arr[i + 0] - arr [i + 1]);
        }
        Arrays.sort(arr);
        System.out.println(arr[arr.length-1]);



      }
}

I am a beginner in Java and trying to learn coding. I was trying to find the maximum difference between adjacent array elements, but while running it, ideone is giving a runtime error.

Switching to fixed window size increased the risk of failure

During the initial implementation stage, all automated tests were run from my laptop. I used a maximized window and never had an issue with the appearance of various elements on the screen.

Now we have reached the stage where the automated test suite will be executed by more than one person, so we have to consider differences in screen resolution, OS (Mac/Windows), etc.

To get on the same page, we decided to use a fixed window size instead of max window:

'chromeOptions': { args: ['--no-sandbox', '--window-size=1366,768'] } },

Once the fixed window size was in place, I started observing slight inconsistencies in the display. The same window could appear slightly above or below the "expected" location. For the most part, it doesn't affect the outcome of test runs... but about 10% of the time it leads to a failure due to an "element is not visible" or "element is not clickable" error.

In summary, switching to a fixed window size appears to increase the probability of a failure by 10%, which will result in a lot of extraneous "noise" once everyone starts using this suite.

Is there a way to achieve the same consistency as the maximized window method, or is this a known imperfection/limitation on the Protractor side?

Is there a way espresso tests will continue from the next test on app/process crash?

I have an app with 50 Espresso tests. On the 10th test, the app crashes and the rest of the tests won't execute. Is there a way to restart the app and have execution continue from the next test?

What does the test indicator mean in an ANSI X12 message?

I just started my career in EDI. I want to know what the test indicator means in an ANSI X12 message. I know about the usage indicator. Can anybody clarify this for me?

Thanks in advance.

Set the cache directory for com.apple.dt.instruments

There are a number of test machines on our CI farm. I've noticed the Mac machines have started to run out of disk space. This is caused by the directory

/Library/Caches/com.apple.dt.instruments

Obviously the tests are causing this growth. Is it possible for me to redirect them to create the cache in our Jenkins workspace? i.e. So the cache will be deleted between runs.

How to test code that is using whatwg-fetch with mocha?

I'm using http://ift.tt/11gdJb1 in my app, which works fine but I would like to test my code with Mocha.

This does not work out of the box. I'm getting:

1) testApi.js Test api error handling:
 ReferenceError: fetch is not defined
  at callApi (callApi.js:10:10)
  at Context.<anonymous> (testApi.js:8:40)

Because well, fetch is not defined. When I build for the browser fetch is exposed by webpack.

I've tried using http://ift.tt/18q0Hew but the API is slightly different and wants full URLs instead of relative paths, for example.

Is there a solution to this problem?
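
One common workaround (a sketch, assuming the node-fetch package rather than whatwg-fetch, which needs a browser's XMLHttpRequest): expose a fetch implementation globally in a Mocha setup file, e.g. one loaded with mocha --require ./test/setup.js, before the code under test runs.

// test/setup.js (hypothetical file name)
global.fetch = require('node-fetch');
// Only needed if the code under test also touches these globals:
global.Response = global.fetch.Response;
global.Headers = global.fetch.Headers;
global.Request = global.fetch.Request;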

Each test case as a separate report log file with Surefire

Could you please help me? I have a test class:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
@Category(NewServicesESB.class)
public class ConnectionTest {


 @Test
 public void checkConnection() { }

 @Test
 public void createConnection() { }



}

Now I want a separate .txt output report file for each test: one for checkConnection() and another for createConnection().

When I run the mvn surefire-report:report command, I get one single file covering both of my methods, checkConnection() and createConnection().

When I use the Jenkins JUnit report plugin, the report shows the test class, and inside this class I have my two methods; when I open either of them I see the combined log from both checkConnection() and createConnection().

Selenium: FindsBy with collection

I am a beginner in testing and have a question. How can I correctly use a ReadOnlyCollection<IWebElement> with the FindsBy attribute? My collection is always null after the test starts. Here is my code in C#:

        [FindsBy(How = How.Name, Using = "role")]
        public ReadOnlyCollection<IWebElement> radPercentage { get; }

and here is testing web: http://ift.tt/292gZAu

I want to do something like this: radPercentage[2].Click();

Check if a class's property or method is declared as sealed [duplicate]

I've got the following derivations:

interface IMyInterface
{
    string myProperty {get;}
}

abstract class MyBaseClass : IMyInterface // Base class declares myProperty as abstract
{
    public abstract string myProperty {get;}
}

class Myclass : MyBaseClass // Derived class seals the overridden myProperty
{
    public sealed override string myProperty
    {
        get { return "value"; }
    }
}

I would like to be able to check whether a member of a class is declared as sealed. Something like this:

PropertyInfo property = typeof(Myclass).GetProperty("myProperty");

bool isSealed = property.GetMethod.IsSealed; // IsSealed does not exist

The point of all this is to be able to run a test that checks the code/project for consistency.

The following test fails:

PropertyInfo property = typeof(Myclass).GetProperty("myProperty");

Assert.IsFalse(property.GetMethod.IsVirtual);
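
A sketch of a reflection check that may be what you are after (assuming "sealed" here means a sealed override): there is no IsSealed flag on MethodInfo, but a sealed override accessor is reported as both virtual and final, so MethodBase.IsFinal is the property to inspect.

PropertyInfo property = typeof(Myclass).GetProperty("myProperty");
MethodInfo getter = property.GetMethod;

// A sealed override is still virtual, but marked final by the runtime.
bool isSealedOverride = getter.IsFinal && getter.IsVirtual;

Assert.IsTrue(isSealedOverride); // true for Myclass.myProperty in the example above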

Seem to Have the Wrong Content Type When POSTing with Chai-HTTP

I am looking to make use of Chai-HTTP for some testing. Naturally I want to test more than my GETs; however, I seem to be hitting a major roadblock when attempting to make POSTs.

In an attempt to figure out why my POSTs weren't working I began hitting them against a POST test server.

Here is a POST attempt formatted using an entirely different toolchain (Jasmine-Node and Frisby) for testing (that works just fine):

frisby.create('LOGIN')
  .post('http://ift.tt/Lz8JpE', {
    grant_type: 'password',
    username: 'hello@world.com',
    password: 'password'
  })
  .addHeader("Token", "text/plain")
  .expectStatus(200)
  .toss();

Which results in:

Time: Mon, 27 Jun 16 13:40:54 -0700
Source ip: 204.191.154.66

Headers (Some may be inserted by server)
REQUEST_URI = /post.php
QUERY_STRING = 
REQUEST_METHOD = POST
GATEWAY_INTERFACE = CGI/1.1
REMOTE_PORT = 19216
REMOTE_ADDR = 204.191.154.66
HTTP_CONNECTION = close
CONTENT_LENGTH = 64
HTTP_HOST = posttestserver.com
HTTP_TOKEN = text/plain
CONTENT_TYPE = application/x-www-form-urlencoded
UNIQUE_ID = V3GPVkBaMGUAAB1Uf04AAAAc
REQUEST_TIME_FLOAT = 1467060054.9575
REQUEST_TIME = 1467060054

Post Params:
key: 'grant_type' value: 'password'
key: 'username' value: 'hello@world.com'
key: 'password' value: 'password'
Empty post body.

Upload contains PUT data:
grant_type=password&username=hello%40world.com&password=password

And here is a POST attempt using Chai and Chai-HTTP:

describe('/post.php', function() {

  var endPointUnderTest = '/post.php';

  it('should return an auth token', function(done) {
    chai.request('http://ift.tt/15VrR98')
      .post(endPointUnderTest)
      .set('Token', 'text/plain')
      .send({
        grant_type: 'password',
        username: 'hello@world.com',
        password: 'password'
      })
      .end(function(err, res) {
        console.log(res);
        res.should.have.status(200);
        done();
      });
  });
});

Which results in:

Time: Tue, 28 Jun 16 06:55:50 -0700
Source ip: 204.191.154.66

Headers (Some may be inserted by server)
REQUEST_URI = /post.php
QUERY_STRING = 
REQUEST_METHOD = POST
GATEWAY_INTERFACE = CGI/1.1
REMOTE_PORT = 1409
REMOTE_ADDR = 204.191.154.66
HTTP_CONNECTION = close
CONTENT_LENGTH = 76
CONTENT_TYPE = application/json
HTTP_TOKEN = text/plain
HTTP_USER_AGENT = node-superagent/2.0.0
HTTP_ACCEPT_ENCODING = gzip, deflate
HTTP_HOST = posttestserver.com
UNIQUE_ID = V3KB5kBaMGUAAErPF6IAAAAF
REQUEST_TIME_FLOAT = 1467122150.9125
REQUEST_TIME = 1467122150

No Post Params.

== Begin post body ==
{"grant_type":"password","username":"hello@world.com","password":"password"}
== End post body ==

Upload contains PUT data:
{"grant_type":"password","username":"hello@world.com","password":"password"}

Notice the difference in CONTENT_TYPE, Post Params and PUT data in particular (I think this is the source of my problem).

Where Jasmine/Frisby would submit the POST using the 'application/x-www-form-urlencoded' format, Chai-HTTP seems to be using the 'application/json' format.

Am I somehow misusing Chai-HTTP's POST capabilities? Or does Chai-HTTP not allow 'application/x-www-form-urlencoded' POST requests? I have not been able to resolve this, and it is the final hurdle in my transition to a Mocha/Chai toolchain for testing (which is the goal; I would prefer not to use a different library unless it's absolutely necessary).
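
A sketch of one possible fix: chai-http requests are superagent requests, and superagent's .type('form') makes .send() serialize the body as application/x-www-form-urlencoded instead of JSON (the host below is the posttestserver.com endpoint from the logs above).

chai.request('http://posttestserver.com')
  .post('/post.php')
  .set('Token', 'text/plain')
  .type('form')   // serialize the body as form-urlencoded rather than JSON
  .send({
    grant_type: 'password',
    username: 'hello@world.com',
    password: 'password'
  })
  .end(function (err, res) {
    res.should.have.status(200);
  });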

Do you know any techniques and tools for testing Windows 8 applications?

Have any of you ever done automated testing of Windows 8 Tablet applications? Do you have any recommendations for automation tools to help with the process? Do you have any recommended approaches to doing automation with this platform? Anything I can read or look at to help me with this process?

We had talks about Selenium Webdriver and Protractor. I was going to look at Appium as well to see if that can help.

Thanks so much for your feedback!

Suggest any workaround to Enable Developer Options on iOS (preferably 9 and higher) without use of MacOS or Xcode?

Currently I am using Appium to test mobile apps on iOS. I was wondering if there is any workaround to enable Developer Options on iPhone and iPad using my PC? Please also suggest any VMware-style solution, like a Hackintosh. If you say it's not possible, I would like to add that there are many applications and testing frameworks, like SeeTestAutomation, that are able to activate this option without any use of Xcode or macOS.

Please brainstorm and help! :p

SeeTestAutomation: Test sends app to background

I'm currently trying out the SeeTestAutomation suite from Experitest. I want to just see what it can do in terms of automation.

Recording the steps for the test is working out fine so far, but during playback the test always sends the app to the background on the third step or so. Following that, the rest of the test script understandably fails, because the app is in the background.

This has occurred on every Android device I have tested so far. I have not tried iOS.

Has anyone else had this problem?

How to Test Angular2 / TypeScript HTTPService without Mock

import {Injectable} from '@angular/core';
import {Http} from '@angular/http';

@Injectable()
export class HttpService {
  result: any;

  constructor(private http:Http) {
  }

   public postRequest(){
       return this.http.get('http://httpbin.org/get');    
  }
}

Above is my code, here is my Test:

I do not want to mock anything, just test the real http connection.

Edit - New service.spec file:

import {beforeEachProviders, beforeEach, it, describe, expect, inject} from '@angular/core/testing';
import {HttpService} from '../../providers/http-service/http-service';
import {TranslateService} from 'ng2-translate/ng2-translate';
import {Goal} from '../../providers/goal/goal';
import {NavController} from 'ionic-angular';
import {HTTP_PROVIDERS, Http} from '@angular/http';

describe('Http Service Test', () => {

      beforeEachProviders(() => {
        return [
            HTTP_PROVIDERS,
            HttpService
        ];
    });

    it('should return response when subscribed to postRequest',
        inject([HttpService], (httpService: HttpService) => {

            httpService.postRequest().subscribe((res) => {
                expect(res.text()).toBe('hello raja');
            }); 
    }));
});

These are the errors in my Karma console:

28 06 2016 14:33:32.067:ERROR [Chrome 51.0.2704 (Mac OS X 10.11.4) | Http Service Test | should return response when subscribed to postRequest]: TypeError: Cannot read property 'getCookie' of null
    at CookieXSRFStrategy.configureRequest (http://localhost:9876/absolute/var/folders/vy/18sb1wqs60g734bhr75cw9_r0000gn/T/9b9439f5f9c1590d3052594bcae9e877.browserify?26719cf22e6406ebc638b6b187c777666dcc5698:36568:81)
    at XHRBackend.createConnection (http://localhost:9876/absolute/var/folders/vy/18sb1wqs60g734bhr75cw9_r0000gn/T/9b9439f5f9c1590d3052594bcae9e877.browserify?26719cf22e6406ebc638b6b187c777666dcc5698:36583:28)
    at httpRequest (http://localhost:9876/absolute/var/folders/vy/18sb1wqs60g734bhr75cw9_r0000gn/T/9b9439f5f9c1590d3052594bcae9e877.browserify?26719cf22e6406ebc638b6b187c777666dcc5698:37476:20)

How to print to Jenkins' console report the messages from failed asserts?

My full-stack tests run on Jenkins and output nothing on success; on failure they output only the test name and the failed line. This says nothing about what went wrong.

Is there a way to print the assertion error message on the Jenkins console?

I have a TestWatcher that already takes a screenshot. Should it also do a System.out.println(e.getMessage())?

I want it to print something like this:

java.lang.AssertionError: Page is listing a different job
Expected: <true>
     but: was <false>
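
A minimal sketch of extending the TestWatcher you already have (the class and helper names here are illustrative): print the assertion message in failed(), so it ends up in the Jenkins console log alongside the screenshot.

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class ConsoleReportingWatcher extends TestWatcher {
    @Override
    protected void failed(Throwable e, Description description) {
        // takeScreenshot(description);  // the existing screenshot logic would stay here
        System.out.println(description.getDisplayName() + " failed:");
        System.out.println(e.getMessage()); // e.g. the "Expected: <true> but: was <false>" text
    }
}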

Auto install setup in virtual machine

I have a build system: it builds my C# software and then runs another build that creates my setup and deploys that setup to a repository.

What I need now is for this setup to be installed on a virtual machine or sandbox after the deployment.

How can I do that? Is there any software that does this for me?

For example, after the setup build is done, this software, with my own configuration, takes my setup and installs it in silent mode on the VM/sandbox, and then, if I want, I can also run tests.

The main idea is that whenever a build is deployed, it is automatically installed on a machine for me so I can quickly see the result and run tests; otherwise, after every build, I always need to open the machine, install, and test manually.

Thank you,

Testing json: html form with \Codeception\PhpBrowser

I want to get a JSON response with two fields:

{status: 'mystatus', html: '<form>...</form>'}

How can I load the HTML from the PHP array ($content['html']) into PhpBrowser and use methods such as $I->submitForm(...) or $I->seeText(...) from the PhpBrowserTester (Cept)?

Here I load the JSON from my server (getJsonContent() is a helper method):

$I->amOnPage('/data/add-language-dialog/');
$content = $I->getJsonContent();

How can I train and test a softmax model in iTorch?

I'm new to iTorch programming. I have a problem with the "softmax trainer": I have a feature vector and labels, and the feature vector has size 1x4098. Now I must create a new model and train it. How can I do that? Can someone advise me, please?

Thanks in advance.

How to check ppt slides and content inside using node protractor?

Can anybody suggest a good npm module to check the number of PPT slides and the content inside, using Node/Protractor?

failing to unlock android screen while testing

I get an error message related to the screen lock while testing an Android app. I took this code from SO posts to unlock the screen, implementing this method:

public void callApplicationOnCreate(Application app) {
        // Unlock the screen
        KeyguardManager keyguard = (KeyguardManager) app.getSystemService(Context.KEYGUARD_SERVICE);
        keyguard.newKeyguardLock(getClass().getSimpleName()).disableKeyguard();

        // Start a wake lock
        PowerManager power = (PowerManager) app.getSystemService(Context.POWER_SERVICE);
        mWakeLock = power.newWakeLock(PowerManager.FULL_WAKE_LOCK | PowerManager.ACQUIRE_CAUSES_WAKEUP | PowerManager.ON_AFTER_RELEASE, getClass().getSimpleName());
        mWakeLock.acquire();

        super.callApplicationOnCreate(app);
    } 

Here's the error message

java.lang.RuntimeException: Waited for the root of the view hierarchy to have window focus and not be requesting layout for over 10 seconds. If you specified a non default root matcher, it may be picking a root that never takes focus. Otherwise, something is seriously wrong

Am I doing something wrong? Is there another way to do this?

Patching instead of mocking in tests

Is there a Java library that allows patching a method or a class instead of mocking it? Something similar to Python's patch: http://ift.tt/29jNi9H
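
The closest analogue I know of is PowerMock layered on Mockito, which can replace static or final methods for the duration of a test. A hedged sketch (the LegacyUtils class and its loadConfig() method are hypothetical):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(LegacyUtils.class)   // hypothetical class being "patched"
public class LegacyUtilsTest {

    @Test
    public void replacesStaticMethodForTheTest() {
        PowerMockito.mockStatic(LegacyUtils.class);
        PowerMockito.when(LegacyUtils.loadConfig()).thenReturn("patched value");

        // Any code under test that calls LegacyUtils.loadConfig() now sees "patched value".
    }
}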

How do I test my GPS code?

As part of designing a GPS/IRNSS receiver, I have written code which takes the raw GPS navigation data (in frame format) and calculates the latitude and longitude of the receiver. Is there any sample GPS data available in the prescribed GPS frame format so that I can test my code? Or is there any other way to test it? Thanks in advance.

lundi 27 juin 2016

Looking for more codility strategies

I am looking for some general strategies for solving Codility test. There are some tips that I can come up with:

  • Read it carefully
  • Think like a mathematician (if you are good at math)
  • Find ways around the problem; something is not difficult if you can convert the original problem into an easier one with a small twist, e.g. a prefix sum (see the sketch just below this list)
  • Find the easiest solution first, then try to improve it if you can. This helps you show your incremental thinking to employers/firms even if you don't get 100% overall.
  • Add more test cases: corner cases (null, 0, greater than, less than, complicated inputs)
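
A tiny illustration of the prefix-sum trick mentioned above: precompute running sums once so any range sum can be answered in O(1) instead of re-summing the slice each time.

public class PrefixSumDemo {
    public static void main(String[] args) {
        int[] a = {3, 1, 4, 1, 5};
        int[] prefix = new int[a.length + 1];
        for (int i = 0; i < a.length; i++) {
            prefix[i + 1] = prefix[i] + a[i];  // prefix[k] = sum of the first k elements
        }
        // Sum of a[l..r] inclusive is prefix[r + 1] - prefix[l]; e.g. a[1..3] = 1 + 4 + 1 = 6.
        System.out.println(prefix[4] - prefix[1]);
    }
}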

Please help me, thanks guys.

AssertionError vs. ComparisonFailure in JUnit?

I have two nearly identical tests where one throws an AssertionError and the other throws a ComparisonFailure. The only difference seems to be that the AssertionError prints the diff in the console, while the ComparisonFailure lets me click to see a pop-out window. They are both using the same method (assertThat(actual).isEqualTo(expected)), so I am not sure what could be triggering the different results. What is the difference between these, and how do I control which one gets output?

C++: where to put constant variables private to a .cc file if I need them for testing

My header file looks like this:

// method.h
class Class {
    public:
        string Method(const int number);
};

My cc file looks like this

// method.cc
#include "method.h"

namespace {
    const char kImportantString[] = "Very long and important string";
}

string Class::Method(const int number) {
    [... computation which depends on kImportantString ...] 
    return some_string;
}

Now, for some inputs Method() should return kImportantString, but for other inputs it must not return kImportantString.

Therefore, I would like to create a test file, which would look like this:

// method_test.cc
#include "method.h"

void Test() {
    assert(Method(1) == kImportantString);  // kImportantString is not visible
    assert(Method(2) != kImportantString);  // in this file, how to fix this?
}

But currently the problem is that kImportantString is not within the scope of the method_test.cc file.

  • Adding kImportantString to method.h is not ideal, as it is not needed inside the header file.
  • Creating a separate file "utils.h" and putting just one string there seems like overkill (although it might be the best option).
  • Copying kImportantString into the test file is not ideal, because the string is quite long, and later someone might accidentally change it in one file but not the other.

Hence, my question is:

What's the best way to make kImportantString visible in the test file, and invisible in as many other places as possible?
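
One common approach, sketched here with a hypothetical file name: move the constant from the anonymous namespace into a named internal namespace, declare it in a small internal header that only method.cc and the test include, and keep the definition in method.cc. (The anonymous namespace has to go, because it gives kImportantString internal linkage and the test could never link against it.)

// method_internal.h (hypothetical; not part of the public API)
#ifndef METHOD_INTERNAL_H_
#define METHOD_INTERNAL_H_

namespace internal {
extern const char kImportantString[];  // declaration only
}  // namespace internal

#endif  // METHOD_INTERNAL_H_

// method.cc
#include "method_internal.h"
namespace internal {
const char kImportantString[] = "Very long and important string";  // single definition
}  // namespace internal

// method_test.cc
#include "method.h"
#include "method_internal.h"
// assert(Class().Method(1) == internal::kImportantString);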

Sequence contains no elements being thrown only from AutoFixture

I use Ploeh.Autofixture with my BDD tests.

I'm trying to write a fixture for a User class, which in turn uses the CentralConfiguration class. The CentralConfiguration constructor looks like this:

public CentralConfiguration(IConfigurationRepository configurationRepository, ILogger logger)
{
   _logger = logger;
   _configuration = configurationRepository.Single();
   LogPropertyValues();
}

The second line in the constructor, although it works fine when used by a User, throws a "Sequence contains no elements" exception EVERY TIME I try to build a fixture for the tests. I even tried building a Configuration object manually and using

configuration.Single().Returns(myCustomObject)

but nothing changed (actually this line started throwing the same exception).

What am I doing wrong, and how can I circumvent this issue?

Rspec reports happening before test begins

I'm trying to set up a test job on Jenkins, using a Ruby test. The test works perfectly on the VM that Jenkins is set up with, but the failure occurs when it tries to generate the report .xml document.

Finished in 1 minute 21.54 seconds (files took 2.12 seconds to load)
13 examples, 0 failures
Archiving artifacts
Recording test results
Test reports were found but none of them are new. Did tests run? 
For example, C:\Jenkins\workspace\COMPANYNAME - STAGING - SMOKE - Full Preview Test\browser\firefox\lib\spec\reports\results.xml is 7 days 1 hr old

Build step 'Publish JUnit test result report' changed build result to FAILURE
Finished: FAILURE

When I watch the VM, inside lib/spec/reports I actually see the report being generated with a date-modified attribute of 6/20/2016. Even when I delete the .xml, it is regenerated with the same modified date. If you need any additional info or code, please comment with a request.

Url validation rails

In my model I have added the following code for validation of a URL a user can enter:

validates :website, presence: true
validates :website, format: { with: URI.regexp }, if: 'website.present?'

I have written a test with an invalid URL:

http://example,nl

When I run the test, the validation says this is valid input. I have tried it in the application itself, and it is accepted as a valid URI. Is there a way to configure the regexp so that this is treated as an invalid URL?

Failing custom jasmine matcher

The Story:

We've been using a custom jasmine matcher to expect an element to have a hand/pointer cursor:

beforeEach(function() {
    jasmine.addMatchers({
        toHaveHandCursor: function() {
            return {
                compare: function(actual) {
                    return {
                        pass: actual.getCssValue("cursor").then(function(cursor) {
                            return cursor === "pointer";
                        })
                    };
                }
            };
        },
    });
});

It works great and makes the tests readable:

expect(queuePage.sortByButton).toHaveHandCursor();

The problem:

When the expectation fails, currently we get a completely unreadable huge chunk of red text on the console in a form:

  • Expected ElementFinder({ ptor_: Protractor({ getProcessedConfig: Function, forkNewDriverInstance: Function, restart: Function, controlFlow: Function, schedule: Function, setFileDetector: Function, getSession: Function, getCapabilities: Function, quit: Function, actions: Function, touchActions: Function, executeScript: Function, executeAsyncScript: Function, call: Function, wait: Function, sleep: Function, getWindowHandle: Function, getAllWindowHandles: Function, getPageSource: Function, close: Function, getCurrentUrl: Function, getTitle: Function, findElementInternal_: Function, findElementsInternal_ ... 10 minutes of scrolling ... , click: Function, sendKeys: Function, getTagName: Function, getCssValue: Function, getAttribute: Function, getText: Function, getSize: Function, getLocation: Function, isEnabled: Function, isSelected: Function, submit: Function, clear: Function, isDisplayed: Function, getOuterHtml: Function, getInnerHtml: Function, getId: Function, getRawId: Function, serialize: Function, takeScreenshot: Function }) to have hand cursor.

The Question:

Why is it happening? How can we improve the matcher and output a user-friendly error instead? Something like:

Expected 'auto' to be equal to 'pointer' cursor value.


From what I understand, we would need to provide a message value for a custom matcher, but I'm not completely sure how to pass an actual element's cursor CSS value into the message.
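
A minimal sketch of that idea, assuming Protractor's jasminewd adapter (which, as far as I know, resolves a promise-valued pass before reading message): build the result object first, then fill in result.message once the actual cursor value is known.

jasmine.addMatchers({
    toHaveHandCursor: function() {
        return {
            compare: function(actual) {
                var result = {};
                result.pass = actual.getCssValue("cursor").then(function(cursor) {
                    result.message = "Expected cursor '" + cursor + "' to be 'pointer'.";
                    return cursor === "pointer";
                });
                return result;
            }
        };
    },
});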

Can't mock a factory method that calls another factory method

I have a factory like this

angular.module('app')
.factory('Utils', function () {
   function one() {
   }

   function two() {
     one();
   }

   return {
     one: one,
     two: two
   };
});

In a jasmine spec I attempt to do something like this:

it('should verify', inject(function(Utils) {
  spyOn(Utils, 'one');
  Utils.two();
  expect(Utils.one).toHaveBeenCalled();
}));

However, I get an error saying the spy was never called. I guess it's some sort of reference issue. Any idea why I can't spyOn a factory function that is called from another function in the same factory?
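
A sketch of the usual workaround: spyOn replaces the one property on the object the factory returned, but two() calls the original one through its closure reference, so the spy never sees the call. Routing internal calls through the returned service object fixes that.

angular.module('app').factory('Utils', function () {
  var service = {
    one: function () {
    },
    two: function () {
      service.one();  // looked up on the object, so spyOn(Utils, 'one') intercepts it
    }
  };
  return service;
});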

Mandatory field form is getting vibrated whenever not filled (PYTHON)

I'm currently testing a web application in Selenium. The issue I'm facing involves filling in standard forms: whenever the user does not fill in the "Email" field and presses "Submit", the "Email" field vibrates for a second to indicate that it is a mandatory field.

Is there a way to test this vibration in Selenium?

Why do non-unique group names in TestNG tests affect other test classes? Can I use the same group names in different test classes?

I have the same group names for the methods in 2 of my test classes in TestNG, e.g.:

@Test(description = "step 4", groups = "4", dependsOnGroups = "3")

However, when I run one of these tests, the other one gets automatically included in the temporary XML file and it runs as well. I want to keep my group names relatively simple and don't want them to be unique. Is this possible, or should I make them unique?

jest-cli clean output with --watch

When running jest --watch, everything works OK, but after each run it clears the output (with a ~3 second delay).

So I'm not able to read the test output.

How do I disable this clearing?

Or what am I doing wrong?

Testing os methods in Python

How would one test the OS methods provided in Python? For example, how would you test the use of os.mkdir?

def create_folder(self):
    os.mkdir("/parentFolder/newFolder")

What can be used to test this method?

This method would have test cases such as the following (a mock-based sketch appears after the list):

  • Verifying the folder was created

  • Insufficient permissions to create folder

  • Insufficient space to create folder
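
A minimal sketch with unittest.mock, assuming the class owning create_folder looks like the snippet above (FolderMaker is a stand-in name): patch os.mkdir to assert the call for the success case and to simulate a permission failure.

import os
import unittest
from unittest import mock


class FolderMaker:
    """Stand-in for the class that owns create_folder (hypothetical name)."""
    def create_folder(self):
        os.mkdir("/parentFolder/newFolder")


class CreateFolderTest(unittest.TestCase):
    @mock.patch("os.mkdir")
    def test_folder_created(self, mkdir_mock):
        FolderMaker().create_folder()
        mkdir_mock.assert_called_once_with("/parentFolder/newFolder")

    @mock.patch("os.mkdir", side_effect=PermissionError)
    def test_insufficient_permissions(self, _mkdir_mock):
        with self.assertRaises(PermissionError):
            FolderMaker().create_folder()


if __name__ == "__main__":
    unittest.main()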

Thanks

encoding response value to base64 and using it on another test

I'm trying to do some testing using JMeter, but I'm facing an issue trying to do something more complex.

I have a "login" HTTP request test whose response includes an auth_token. I need to append ":" to it and Base64-encode the result, so I can use that value in the requests of other tests. I've been reading that this can be done using Beanshell, but I haven't been able to achieve it yet. I would appreciate it if someone could give me some steps to perform this task.

Thanks
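
A sketch of a Beanshell/JSR223 PostProcessor attached to the login request, assuming the token has already been extracted into a JMeter variable named auth_token (e.g. with a Regular Expression or JSON Extractor; the variable names are illustrative):

import org.apache.commons.codec.binary.Base64;

String token = vars.get("auth_token") + ":";                       // append the trailing colon
String encoded = Base64.encodeBase64String(token.getBytes("UTF-8"));
vars.put("auth_token_b64", encoded);
// Later samplers can reference it as ${auth_token_b64}, e.g. in an
// "Authorization: Basic ${auth_token_b64}" header.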

How to test generators in Python 3? It gives an error all the time

I have the snippet of code below, which I cannot change but need to test. It connects to the server and gets the data that I need. In particular, I need to feed false input into the variable data1 on line 12 to test it, but I haven't been able to achieve that so far.

1. url = "http://localhost:8000"
2. data = urllib.request.urlopen(url)
3.
4. def get_data():
5.     yield data.read()
6.
7. #generator
8. def get_objects(in_stream):
9.     json_object = ""
10.    buffer = b''
11.    for data1 in in_stream:
12.        data = buffer + data1
...

22.for json_dict in get_objects(get_data()):
23.   print(repr(json_dict))

get_objects(in_stream) must take some sort of iterable, right? So I am trying to pass a string there:

def test_falsely(self):
    self.assertEqual(solution.get_objects("bla bla"), "blabla")

But I am getting an error:

======================================================================
FAIL: test_falsely (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 21, in test_falsely
    self.assertEqual(solution.get_objects("bla bla"), "blabla")
AssertionError: <generator object get_objects at 0x103433308> != 'blabla'


What am I doing wrong? Does anybody have an idea how to test it? Thank you.
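
A minimal sketch of the missing step, meant as a replacement for test_falsely inside the existing test class: calling a generator function only creates a generator object, so you have to iterate it (e.g. with list()) before comparing results. Also note that get_objects concatenates chunks onto a bytes buffer (b''), so a fake stream should yield bytes, not str. The expected output below is assumed, since the body of get_objects is truncated.

def test_with_fake_stream(self):
    fake_stream = [b'{"a": 1}', b'{"b": 2}']           # stands in for get_data()
    results = list(solution.get_objects(fake_stream))  # consume the generator
    self.assertEqual(results, [{"a": 1}, {"b": 2}])    # assumed parsed output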

Using Selenium Webdriver Selectors in Appium ios

A project I'm on is developing a web app at the same time as an iOS app (for the same thing), and I'm hoping to be able to reuse existing Selenium tests, but we're having trouble with selectors. Is there a selector type or attribute name that can be used for both Selenium WebDriver and Appium iOS, so that I can just set a variable to either browser or app and the tests run and work on both? Nobody on this project has used Appium before, so we are lacking a lot of knowledge.

I tried using IDs and found that iOS doesn't work with them; I changed to names and found that name selectors have been removed from Appium. If possible we'd prefer to use a selector that is the same in the browser as it is in the app.

Thanks

When you are Unit Testing, what would you test?

I am starting to write a few unit tests for an application and I would like to hear your experience in this area regarding what you would test: only the happy path where everything works, or the bad path as well, where the test might fail? What is the best strategy to follow here?

Note: these tests are focused on the PHP side; that's why I have included the phpunit tag.

Fail to test the rendered output after a redirect from a DELETE route - Mojolicious

I am currently expanding my test suite to increase test coverage. I want to test my controller and the HTML output that it renders, but I ran into a problem with DELETE methods. Let me explain with an example.

I have a route:

$r->delete('/backups/:id')
  ->to('backup#delete_backup')
  ->name('backup_delete');

that points to the following function in the controller:

sub delete_backup {
    my $self       = shift;
    my $id         = $self->param('id');


    if ( something ) {
        $self->flash( msg => "Backup id $id deleted!" );
    }
    else{
        $self->flash( msg => "Cannot delete, backup id $id not found!" );   
    }
    $self->redirect_to($self->url_for('backup_index'));
}

where the method that handles the backup_index route just displays the $msg and shows a few other irrelevant pieces of data.

I want to test this method, so I write a test:

$t_logged_in->ua->max_redirects(3);
my $page = $t_logged_in->app->url_for( 'backup_delete', id => $backup_id );
$t_logged_in->delete_ok($page)
            ->status_isnt( 404, "Checking: 404 $page" )
            ->status_isnt( 500, "Checking: 500 $page" );

The test passes. But now I want to check whether the text is correct on the web page shown after redirecting, so I do the following:

$t_logged_in->ua->max_redirects(3);
my $page = $t_logged_in->app->url_for( 'backup_delete', id => $backup_id );
$t_logged_in->delete_ok($page)
            ->status_isnt( 404, "Checking: 404 $page" )
            ->status_isnt( 500, "Checking: 500 $page" )
            ->content_unlike(qr/Cannot delete,/i)
            ->content_like(qr/deleted/i);

The test fails. It fails because the content is empty, so the matching is done as if it were:

'' =~ /deleted/i;
'' !~ /Cannot delete,/i;

and this is of course false in both cases. In the browser, of course, the redirects work perfectly and I see everything as designed in the test. I could change the method to POST or GET, but I wanted to do the routing properly, in the way an API would be designed.

Question: how to design the test such that the content can be matched after the redirect?

For those who want to dig deeper, I give links to Github.

How do I write if and else statements in Selenium WebDriver?

I am trying to write a Selenium WebDriver script, but I don't know how to write if and else statements in Selenium WebDriver.
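
For what it's worth, there is no special Selenium construct for conditionals; you use the ordinary if/else of the binding language around WebDriver calls. A minimal Java sketch (the locators and URL are placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class IfElseExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com");  // placeholder URL

        // Plain Java if/else around WebDriver calls.
        if (!driver.findElements(By.id("welcome-banner")).isEmpty()) {   // placeholder locator
            driver.findElement(By.id("welcome-banner")).click();
        } else {
            driver.findElement(By.id("login-link")).click();             // placeholder fallback
        }

        driver.quit();
    }
}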

IntegrationTest with SuperTest expects 302 gets 200 in Sails.js application

I'm trying to write a simple Test for my Controller. I use this documentation from Sails.js

The UserController.test.js:

var request = require('supertest');

describe('UserController', function () {

    describe('#login()', function () {
        it('should redirect to / indexpage', function (done) {
            request(sails.hooks.http.app)
                .post('/login')
                .send({name: 'Stifflers', password: 'Mom'})
                .expect(302)
                .expect('Location', '/', done);
        });
    });

});

The relevant code from AuthController.js:

...
   // is authenticated
      res.writeHead(302, {
        'Location': "/"
       });

       res.end(); 
...

I run the test with npm test and get this error:

Error: expected 302 "Found", got 200 "OK"

When I change the .expect(302) in my test to .expect(200) I get the next error:

Error: expected "Location" header field

I have tried to do it the same way as in the documentation; why doesn't it work?

TEST CONDITION vs. TEST SCENARIO (same or different?)

I have been searching for the difference between a TEST CONDITION and a TEST SCENARIO, and it seems that they are the same. Can anyone explain the difference to me? And can you please give specific examples of each? I just want to understand. Thanks.

How To Automate ETL Testing without using any Automation Tool?

How do I automate ETL testing without using any automation tool?

dimanche 26 juin 2016

Android Intent builder method Unit Test failing

My error is quite strange (for me at least; it might be because I'm new to testing on Android). I have a class with a method called "intentAddBuilder"; it receives a User, an Intent action, and a Drawable, and it builds an Intent with some extras and returns it. The thing is, when I run my app it works perfectly (I even have a method that logs the info after the Intent is returned), but when I run the test, doing exactly the same thing the log method does, it fails because the intent is returned as null.

Here's my method:

public static Intent intentAddBuilder(final User user, String action, Drawable picture){
    Intent intent = new Intent(action);

    final ArrayList<ContentValues> data = new ArrayList<>();

    String fullName = user.getFirstName() + " " + user.getLastName();
    intent.putExtra(ContactsContract.Intents.Insert.NAME, fullName.trim());

    ContentValues nickname = new ContentValues();
    nickname.put(ContactsContract.CommonDataKinds.Nickname.MIMETYPE, ContactsContract.CommonDataKinds.Nickname.CONTENT_ITEM_TYPE);
    nickname.put(ContactsContract.CommonDataKinds.Nickname.NAME, user.getNickname());
    data.add(nickname);

    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    Bitmap bitmap = ((BitmapDrawable) picture).getBitmap();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] byteArray = stream.toByteArray();
    ContentValues photo = new ContentValues();
    photo.put(ContactsContract.CommonDataKinds.Photo.MIMETYPE, ContactsContract.CommonDataKinds.Photo.CONTENT_ITEM_TYPE);
    photo.put(ContactsContract.CommonDataKinds.Photo.PHOTO, byteArray);
    data.add(photo);

    for (Email email : user.getEmails()) {
        ContentValues emailValue = new ContentValues();
        emailValue.put(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Email.CONTENT_ITEM_TYPE);
        emailValue.put(ContactsContract.CommonDataKinds.Email.ADDRESS, email.getEmail());
        data.add(emailValue);
    }

    String website = user.getWebpage();
    if (website != null && !website.equals("")){
        ContentValues websiteValue = new ContentValues();
        websiteValue.put(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Website.CONTENT_ITEM_TYPE);
        websiteValue.put(ContactsContract.CommonDataKinds.Website.URL, website);
        data.add(websiteValue);
    }

    intent.putParcelableArrayListExtra(ContactsContract.Intents.Insert.DATA, data);

    String company = user.getCompany();
    if(company != null && !company.equals("")){
        intent.putExtra(ContactsContract.Intents.Insert.COMPANY, company);
    }

    String jobTitle = user.getJobPosition();
    if(jobTitle != null && !jobTitle.equals("")){
        intent.putExtra(ContactsContract.Intents.Insert.JOB_TITLE, jobTitle);
    }

    return intent;
}

And my test method for this is:

@Test
public void contactIntentBuilderTest() {
    User user = TestUtils.getUserToTest();
    Intent intent = User.intentAddBuilder(user,Intent.ACTION_INSERT,drawable);

    assertEquals((user.getFirstName()+" "+user.getLastName()).trim(),intent.getStringExtra(ContactsContract.Intents.Insert.NAME));

    if (user.getCompany() != null && !user.getCompany().equals("")){
        assertEquals(user.getCompany(),intent.getStringExtra(ContactsContract.Intents.Insert.COMPANY));
    }
    if (user.getJobPosition() != null && !user.getJobPosition().equals("")){
        assertEquals(user.getJobPosition(),intent.getStringExtra(ContactsContract.Intents.Insert.JOB_TITLE));
    }

    ArrayList<ContentValues> data = intent.getParcelableArrayListExtra(ContactsContract.Intents.Insert.DATA);
    int position = 0;
    assertEquals(user.getNickname(),data.get(position).get(ContactsContract.CommonDataKinds.Nickname.NAME));
    position++;

    byte[] b =(byte[]) data.get(position).get(ContactsContract.CommonDataKinds.Photo.PHOTO);
    ByteArrayInputStream is = new ByteArrayInputStream(b);
    Drawable drw = Drawable.createFromStream(is,null);
    assertEquals(drawable,drw);
    position++;

    int positionBeforeEmails = position;
    for (int j=position; j < positionBeforeEmails + user.getEmails().size(); j++){
        Email email = user.getEmails().get(position-user.getEmails().size());
        assertEquals(email.getEmail(),data.get(j).get(ContactsContract.CommonDataKinds.Email.ADDRESS));
        position++;
    }

    if (user.getWebpage() != null && !user.getWebpage().equals("")){
        assertEquals(user.getWebpage(),data.get(position).get(ContactsContract.CommonDataKinds.Website.URL));
    }

}

I might be doing something wrong since I'm new to this, but please can anyone give me an answer?

Thaaanks!

Basic but proper use of beforeEach() or afterEach() with mocha.js and chai.js

I want to use mocha/chai to test code related to binary search trees. Here, I am testing the public insert method. I want to use beforeEach() and/or afterEach() hooks to reset the test environment prior to each it() statement so that I don't have to completely repeat the basics. However, I keep getting various errors.

Spec

describe("BinarySearchTree insert function", function() {

  beforeEach(function() {
    var binarySearchTree = new BinarySearchTree();
    binarySearchTree.insert(5);
  });

  it("creates a root node with value equal to the first inserted value", function () {
    expect(binarySearchTree.root.value).to.equal(5);
  });

  it("has a size equal to the amount of inserted values", function () {
    binarySearchTree.insert(3);
    expect(binarySearchTree.size).to.equal(2);
  });

  it("returns an error for non-unique values", function () {
    binarySearchTree.insert(3);
    expect(binarySearchTree.insert(3)).to.throw(String);
  });

  it("if inserted value is larger than current node, make or descend to rightChild", function () {
    binarySearchTree.insert(3);
    binarySearchTree.insert(10);
    binarySearchTree.insert(7);
    expect(binarySearchTree.root.rightChild.value).to.equal(10);
  });

});

Error: ReferenceError: binarySearchTree is not defined

In truth, I expected errors because there is no afterEach() resetting the test environment, not because binarySearchTree is not defined. I'd like to accomplish this, if at all possible, with only Mocha and Chai (and no other packages like Sinon, etc.).

Tested Code

exports.Node = Node;

function Node(value) {
  this.value = value;
  this.leftChild = null;
  this.rightChild = null;
}

exports.BinarySearchTree = BinarySearchTree;

function BinarySearchTree() {
  this.root = null;
  this.size = 0;
}

BinarySearchTree.prototype.insert = function(value) {
  // 1) when root node is already instantiated
  if (this.root === null) {
    // tree is empty
    this.root = new Node(value);
    this.size++;
  } else {
  // 2) nodes are already inserted
    var findAndInsert = function (currentNode) {
      if (value === currentNode.value) {
        throw new Error('must be a unique value');
      }
      // base case
      if (value > currentNode.value) {
        // belongs in rightChild
        if (currentNode.rightChild === null) {
          currentNode.rightChild = new Node(value);
        } else {
          findAndInsert(currentNode.rightChild);
        }
      } else if (value < currentNode.value) {
        // belongs in leftChild
        if (currentNode.leftChild === null) {
          currentNode.leftChild = new Node(value);
        } else {
          findAndInsert(currentNode.leftChild);
        }
      }
    };
    findAndInsert(this.root);
    this.size++;
  }
};

Bonus question: am I properly testing for the thrown error (when a non-unique value is inserted)?
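
A sketch of the fix for both issues: declare the variable in the describe scope so the it() blocks can see what beforeEach() assigns (a var inside the beforeEach callback is local to that callback), and, for the bonus question, pass a function to expect() so Chai can catch the throw instead of it blowing up before the assertion runs.

describe("BinarySearchTree insert function", function() {
  var binarySearchTree;            // visible to beforeEach and every it()

  beforeEach(function() {
    binarySearchTree = new BinarySearchTree();
    binarySearchTree.insert(5);
  });

  it("creates a root node with value equal to the first inserted value", function () {
    expect(binarySearchTree.root.value).to.equal(5);
  });

  it("returns an error for non-unique values", function () {
    binarySearchTree.insert(3);
    // Wrap the call so Chai can intercept the thrown Error.
    expect(function () { binarySearchTree.insert(3); }).to.throw(Error);
  });
});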