Sunday, 31 January 2016

Disable particular tests through an Annotation Transformer: TestNG

I have a huge project with numerous test cases. Some test cases are supposed to work only in particular environments and some are not, so I'm trying to skip or disable the tests that don't belong to the current environment.

I'm using an Annotation Transformer to override the behaviour of @Test.

Here is my Transformer code, in package com.raghu.listener:

public class SkipTestsTransformer implements IAnnotationTransformer {
    public void transform(ITestAnnotation annotation, Class testClass,
            Constructor testConstructor, java.lang.reflect.Method testMethod){

//          I intend to do this later
//          if(someCondition){
//              // Do something.
//          }
                System.out.println("Inside Transform");
    }
}

As of now I'm just trying to print.

I have many packages and classes on which I have to impose this Transformer.

How and where should I instantiate and register this class?
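
For illustration, a filled-in version of the transformer might look like the sketch below; the environment system property and the group-based convention are my assumptions, not part of the original code. Note that annotation transformers cannot be registered with @Listeners; they are typically added on the command line (for example java org.testng.TestNG -listener com.raghu.listener.SkipTestsTransformer testng.xml) or programmatically through the TestNG API.

package com.raghu.listener;

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class SkipTestsTransformer implements IAnnotationTransformer {

    // Assumed convention: the target environment is passed as -Denv=staging
    // and environment-specific tests carry that name as a TestNG group.
    private static final String CURRENT_ENV = System.getProperty("env", "dev");

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        if (testMethod != null && !runsIn(CURRENT_ENV, annotation.getGroups())) {
            annotation.setEnabled(false); // TestNG skips this @Test entirely
        }
    }

    private boolean runsIn(String env, String[] groups) {
        if (groups == null || groups.length == 0) {
            return true; // no environment group declared: run everywhere
        }
        for (String group : groups) {
            if (group.equalsIgnoreCase(env)) {
                return true;
            }
        }
        return false;
    }
}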

Please suggest any better methods for doing the same.

Thanks in advance

Rails 4 testing nested resources with multiple URL params

I have been at this for a few hours and still can't figure it out. I have two tests on two actions of a nested-resources controller. requests is the parent resource route, and response is the nested resource route.

These two tests give me a "No route matches" error, which does not make sense. In the first test, it tries to run the update action instead of edit. Here are my tests:

test "should get edit" do
  assert_routing edit_request_response_path(@myresponse.request_id, @myresponse), { :controller => "responses", :action => "edit", :request_id => @myresponse.request_id.to_s, :id => @myresponse.id.to_s }
  get :edit, params: { id: @myresponse, request_id: @myresponse.request_id }
  assert_response :success
end

test "should update response" do
  post :update, :request_id => @myresponse.request_id, response: { body: @myresponse.body, request_id: @myresponse.request_id, status: @myresponse.status, subject: @myresponse.subject, user_id: @myresponse.user_id }
  assert_redirected_to response_path(assigns(:response))
end

Here are the errors:

3) Error:
ResponsesControllerTest#test_should_get_edit:
ActionController::UrlGenerationError: No route matches {:action=>"edit", :controller=>"responses", :params=>{:id=>"980190962", :request_id=>"999788447"}}
test/controllers/responses_controller_test.rb:43:in `block in <class:ResponsesControllerTest>'


4) Error:
ResponsesControllerTest#test_should_update_response:
ActionController::UrlGenerationError: No route matches {:action=>"update", :controller=>"responses", :request_id=>"999788447", :response=>{:body=>"This is the body", :request_id=>"999788447", :status=>"draft", :subject=>"This is the subject", :user_id=>"175178709"}}
test/controllers/responses_controller_test.rb:48:in `block in <class:ResponsesControllerTest>'

Mobile App Fault seeding

I want to test Android mobile applications using my own testing strategy. I decided that the best way to assess the testing strategy is to have case studies that inject faults and then try to catch those faults. Does anyone know good case studies? And the main question: how do I inject faults into the app, and how do I catch them?

SUT - Testing Internal Behaviour TDD

I have a question about testing requirements.

Let's take example:

class Article {

    public void deactivate() {
        // some behaviour which deactivates the article
    }
}

My requirements state that an Article can be deactivated. I will implement this as part of my Article class.

The test will call the deactivate method, but what now? At the end of the test I need to check whether the requirement was fulfilled, so I need to implement a method isDeactivated.
And here is my question: should I really do that? This method will only be used by my test case, nowhere else, so I am complicating my class interface just to assert that the article really was deactivated.
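
For reference, the state-based test under discussion would look roughly like this (a minimal sketch assuming JUnit; isDeactivated() is exactly the query that would exist only for the assertion):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ArticleTest {

    @Test
    public void deactivateMarksArticleAsDeactivated() {
        Article article = new Article();

        article.deactivate();

        // isDeactivated() would be added to the class only to make this visible
        assertTrue(article.isDeactivated());
    }
}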

What is the right way to implement it?

Failing a test case

I have a test case as following:

# -*- perl -*-
use strict;
use warnings;
use tests::tests;
check_expected ([<<'EOF']);
Usage: ydi <option(s)> mini-elf-file
 Options are:
  -h --help      Display usage
  -H --header    Show the Mini-ELF header
EOF
pass;

I am trying to get my output to pass this case and match the above. Here are my printf statements:

    printf("Usage: ydi <option(s)> mini-elf-file\n");
    printf(" Options are:\n");
    printf("  -h --help      Display usage\n");
    printf("  -H --header    Show the Mini-ELF header\n");

Which gives me output as the following:

Usage: ydi <option(s)> mini-elf-file
 Options are:
  -h --help      Display usage
  -H --header    Show the Mini-ELF header

But I keep failing the test case. Is anyone able to see where I'm not matching it?

How to test a library which sends/receives HTTP requests

I've built a library in Clojure which sends/receives HTTP requests; it's a client for a REST API web service. How can I test it? I believe my tests should send/receive real requests and responses. So how can I test it? Firstly I mean unit testing, then other kinds.

What are the best tools to automate my websites and monitor them?

I have many websites up and running, and I would like to automate and monitor them around the clock, to be able to detect any error early and fix it before the business goes down. What are your recommendations for free and paid tools?

Thanks,

Trying to fix "Unexpected Request GET" with ng-html2js-preprocessor, but it does not work

I want to test an angular directive with karma and jasmine. The directive uses an external template named complete.html.

I try to get the html template with the following code:

describe('directive', function () {

beforeEach(module('sampleapp'));

it('test',
    function(){


        inject(function ($rootScope, $compile) {

            var $scope = $rootScope.$new();
            var element = $compile(angular.element('<auto-complete></auto-complete>'))($scope);
              $scope.$digest();
            var textarea = element.find('textarea');
            expect(textarea.length).toBe(1);



        });
    }

); });

I then receive this error:

Error: Unexpected request: GET test/complete.html

I searched for a solution for my problem and found this post: Unit Testing AngularJS directive with templateUrl

I followed the instructions for the karma-solution. My karma.conf.js looks like the following:

  plugins : [
      'karma-ng-html2js-preprocessor',
      'karma-chrome-launcher',
      'karma-jasmine'
],

  // preprocess matching files before serving them to the browser
  // available preprocessors: http://ift.tt/1gyw6MG
  preprocessors: {
      '../complete.html': ['ng-html2js']
  },


  ngHtml2JsPreprocessor: {
      moduleName: "complete.html"
  },

  // frameworks to use
// available frameworks: http://ift.tt/1ft83uu
frameworks: ['jasmine'],


// list of files / patterns to load in the browser
files: [
    '../src/main/web/bower_components/jquery/dist/jquery.min.js' ,
    '../src/main/web/bower_components/angular/angular.min.js' ,
    '../src/main/web/bower_components/angular-route/angular-route.js',
    '../src/main/web/bower_components/angular-mocks/angular-mocks.js',
    '../complete.html',
    '../src/main/web/scripts/application.js',
    '../src/main/web/scripts/main.js',
    '../src/main/web/scripts/service.js',
    '../src/main/web/scripts/plz.json',
  'tests.js'

],

(I cropped out the unnecessary parts obviously)

Then, as instructed, I try to load the generated module complete.html in my test file:

describe('directive', function () {

beforeEach(module('sampleapp'));
beforeEach(module('complete.html'));


it('test',
    function(){


        inject(function ($rootScope, $compile) {

            var $scope = $rootScope.$new();
            var element = $compile(angular.element('<auto-complete></auto-complete>'))($scope);
              $scope.$digest();
            var textarea = element.find('textarea');
            expect(textarea.length).toBe(1);



        });
    }

);});

But I still get the error: Unexpected request: GET test/complete.html

I feel like I am missing an important part but can't find a complete solution for the situation I am in. How do I correctly use the generated module so that I don't get the error?

I would be very thankful for some help.

Run py.test on package and local files in same run/session?

If I run a package's test with:

py.test --pyargs package

I seem to lose the discovery of other tests from testpaths in my ini files. Is there a way to combine local tests from files and package/module tests in one session? I'm looking for a way to programmatically run tests on a bunch of packages and run custom tests in local files in one run, but I'm stumped. If I could figure out how to collect package tests from calls inside the tests in the local files, that would also solve my problem (as in, if I could do something like:

def test_that_package():
    magically_collect_and_run_package_tests(package.tests())

) I just don't see if there are hooks for that either in py.test or in a plugin.

Python selenium webdriver dropdown menu how to select items

    <div id="isc_3B" class="scrollingMenu" onscroll="return isc_PickListMenu_0.$lh()" style="position: absolute; left: 403px; top: 63px; width: 450px; height: 298px; z-index: 800684; visibility: inherit; padding: 0px; box-sizing: border-box; overflow: hidden;" role="listbox" eventproxy="isc_PickListMenu_0" aria-hidden="false">
<div id="isc_3C" style="position: relative; display: inline-block; box-sizing: border-box; width: 100%; vertical-align: top; visibility: inherit; z-index: 800684; cursor: default;" eventproxy="isc_PickListMenu_0">
<div id="isc_3D" role="toolbar" tabindex="-1" onblur="if(window.isc)isc.EH.blurFocusCanvas(isc_Toolbar_1,true);" onfocus="if(event.target!=this)return;isc.EH.focusInCanvas(isc_Toolbar_1,true);" onscroll="return isc_Toolbar_1.$lh()" style="position: absolute; left: 0px; top: 0px; width: 434px; height: 22px; z-index: 200936; overflow: hidden; box-sizing: border-box; cursor: default; display: inline-block;" eventproxy="isc_Toolbar_1">
<div id="isc_3A" class="pickListMenuBody" tabindex="1439" onblur="if(window.isc)isc.EH.blurFocusCanvas(isc_PickListMenu_0_body,true);" onfocus="if(event.target!=this)return;isc.EH.focusInCanvas(isc_PickListMenu_0_body,true);" onscroll="return isc_PickListMenu_0_body.$lh()" style="position: absolute; left: 0px; top: 22px; width: 434px; height: 276px; z-index: 201026; overflow: hidden; background-color: white; box-sizing: border-box; cursor: default; display: inline-block; outline-style: none;" eventproxy="isc_PickListMenu_0_body">
<div id="isc_3N" style="position:absolute;overflow:visible;z-index:1000;width:432px">
<div id="isc_PickListMenu_0_body$28s" style="width:1px;height:0px;overflow:hidden;display:none;">
<table id="isc_3Atable" class="listTable" width="432" cellspacing="0" cellpadding="2" border="0" style="table-layout:fixed;overflow:hidden;padding-left:0px;padding-right:0px;" role="presentation">
<tbody></tbody>
<colgroup>
<tbody>
<tr id="isc_PickListMenu_0_row_0" aria-posinset="1" aria-setsize="686" role="option" aria-selected="false">
<td class="pickListCell" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 143px; overflow: hidden;">
<div style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap;WIDTH:139px;" cellclipdiv="true" role="presentation">Pens Stabiliner 808 Ballpoint Fine Black</div>
</td>
<td class="pickListCell" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 144px; overflow: hidden;">
<div style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap;WIDTH:140px;" cellclipdiv="true" role="presentation">Ea</div>
</td>
<td class="pickListCell" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 145px; overflow: hidden;">
</tr>
<tr id="isc_PickListMenu_0_row_1" aria-posinset="2" aria-setsize="686" role="option">
<td class="pickListCellDark" height="16" align="left" style="WIDTH:143px;OVERFLOW:hidden;padding-top:0px;padding-bottom:0px;;white-space: nowrap;">
<td class="pickListCellDark" height="16" align="left" style="WIDTH:144px;OVERFLOW:hidden;padding-top:0px;padding-bottom:0px;;white-space: nowrap;">
<td class="pickListCellDark" height="16" align="left" style="WIDTH:145px;OVERFLOW:hidden;padding-top:0px;padding-bottom:0px;;white-space: nowrap;">
</tr>
<tr id="isc_PickListMenu_0_row_2" aria-posinset="3" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_3" aria-posinset="4" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_4" aria-posinset="5" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_5" aria-posinset="6" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_6" aria-posinset="7" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_7" aria-posinset="8" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_8" aria-posinset="9" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_9" aria-posinset="10" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_10" aria-posinset="11" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_11" aria-posinset="12" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_12" aria-posinset="13" aria-setsize="686" role="option">
**<tr id="isc_PickListMenu_0_row_13" aria-posinset="14" aria-setsize="686" role="option" aria-selected="true">
<td class="pickListCellSelectedDark" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 143px; overflow: hidden;">
<div style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap;WIDTH:139px;" cellclipdiv="true" role="presentation">Adding Machine Roll 57x57mm Lint Free</div>
</td>
<td class="pickListCellSelectedDark" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 144px; overflow: hidden;">
<td class="pickListCellSelectedDark" height="16" align="left" style="padding-top: 0px; padding-bottom: 0px; width: 145px; overflow: hidden;">
</tr>**
<tr id="isc_PickListMenu_0_row_14" aria-posinset="15" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_15" aria-posinset="16" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_16" aria-posinset="17" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_17" aria-posinset="18" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_18" aria-posinset="19" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_19" aria-posinset="20" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_20" aria-posinset="21" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_21" aria-posinset="22" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_22" aria-posinset="23" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_23" aria-posinset="24" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_24" aria-posinset="25" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_25" aria-posinset="26" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_26" aria-posinset="27" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_27" aria-posinset="28" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_28" aria-posinset="29" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_29" aria-posinset="30" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_30" aria-posinset="31" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_31" aria-posinset="32" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_32" aria-posinset="33" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_33" aria-posinset="34" aria-setsize="686" role="option">
<tr id="isc_PickListMenu_0_row_34" aria-posinset="35" aria-setsize="686" role="option">
</tbody>
</table>
<div id="isc_PickListMenu_0_body$284" style="width:1px;height:10416px;overflow:hidden;">
<table style="position:absolute;top:0px;font-size:1px;height:100%;width:100%;z-index:1;overflow:hidden;visibility:hidden;">
</div>
</div>
<div id="isc_3Q" class="scrollbar" onscroll="return isc_PickListMenu_0_body_vscroll.$lh()" style="position: absolute; left: 434px; top: 22px; width: 16px; height: 276px; z-index: 201027; overflow: hidden; box-sizing: border-box; cursor: default; display: inline-block;" dir="ltr" eventproxy="isc_PickListMenu_0_body_vscroll">
<div id="isc_3R" class="vScrollThumb" aria-label=" " onscroll="return isc_PickListMenu_0_body_vscroll_thumb.$lh()" style="position: absolute; left: 434px; top: 38px; width: 15px; height: 20px; z-index: 201033; overflow: hidden; box-sizing: border-box; cursor: default; display: inline-block;" eventproxy="isc_PickListMenu_0_body_vscroll_thumb">
<div id="isc_3O" class="scrollbarDisabled" onscroll="return isc_PickListMenu_0_body_hscroll.$lh()" style="position: absolute; left: 0px; top: 22px; width: 1px; height: 1px; z-index: 201027; overflow: hidden; box-sizing: border-box; cursor: default; display: inline-block; visibility: hidden;" dir="ltr" eventproxy="isc_PickListMenu_0_body_hscroll" aria-hidden="true">
<div id="isc_3P" class="hScrollThumb" aria-label=" " onscroll="return isc_PickListMenu_0_body_hscroll_thumb.$lh()" style="position: absolute; left: 16px; top: 22px; width: 5px; height: 1px; z-index: 201033; overflow: hidden; box-sizing: border-box; cursor: default; display: inline-block; visibility: hidden;" eventproxy="isc_PickListMenu_0_body_hscroll_thumb" aria-hidden="true">
<div id="isc_3T" aria-label="corner menu" role="button" tabindex="1490" onblur="if(window.isc)isc.EH.blurFocusCanvas(isc_PickListMenu_0_sorter,true);" onfocus="if(event.target!=this)return;isc.EH.focusInCanvas(isc_PickListMenu_0_sorter,true);" onscroll="return isc_PickListMenu_0_sorter.$lh()" style="POSITION:absolute;LEFT:434px;TOP:0px;WIDTH:16px;HEIGHT:22px;Z-INDEX:200942;OVERFLOW:hidden;box-sizing:border-box;CURSOR:default;display:inline-block" eventproxy="isc_PickListMenu_0_sorter">
</div>
</div>
</body>

Do you have any ideas how to select options from that menu using the Python WebDriver? Selenium IDE is not helpful at all in this case. I was trying to select it by row id and by name text, and it's not working.

Every tr is an option in the dropdown menu, like: > Adding Machine Roll 57x57mm Lint Free

Saturday, 30 January 2016

Testing Dominion Card Game in C

I've been asked to write test programs on 4 functions in the card game dominion. I've written one (extremely simple) just to make sure I can get it to pass as I'm pretty new to testing. However, I continually get a syntax error at runtime that I cannot figure out.

    #include <stdlib.h>
    #include <stdio.h>
    #include <assert.h>
    #include "dominion.h"
    #include "dominion_helpers.h"
    #include "rngs.h"

    int main() {
        int r = 0, j = 0;
        int adventurer = 8;
        int greathall = 17;

        r = getCost(adventurer);
        assert(r == 6);

        j = getCost(greathall);
        assert(j == 3);

        return 0;
    }

When I compile it, I do get some warnings:

    Undefined symbols for architecture x86_64:
      "_getCost", referenced from:
          _main in unittest1-7d7bf2.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

Which I'm not sure about either, but the base code that we are given from our instructor, as well as all of the other code from my classmates, has these warnings as well.

However, when running I get the following error:

    ./unittest1.c: line 8: syntax error near unexpected token `('
    ./unittest1.c: line 8: `int main() {'

I've tried rewriting it in a blank file, thinking there were some invisible characters or something, but I still get this error. Does anyone see something wrong in my code? Any help is appreciated.

**getCost is called in dominion_helpers

Unit testing asynchronous function which connects to database

Well, I have some functions which connect to a database (Redis) and return some data. Those functions are usually based on promises, but they are asynchronous and contain streams. I looked at and read some things about testing and chose to go with tape, sinon and proxyquire. If I mock this function, how would I know that it works?

The following function (listKeys) returns (through a promise) all the keys that exist in the Redis db after the scanning completes.

let methods = {
    client: client,
    // Cache for listKeys
    cacheKeys: [],
    // Increment and return through promise all keys
    // store to cacheKeys;
    listKeys: blob => {
        return new Promise((resolve, reject) => {
            blob = blob ? blob : '*';

            let stream = methods.client.scanStream({
                match: blob,
                count: 10,
            })

            stream.on('data', keys => {
                for (var i = 0; i < keys.length; i++) {
                    if (methods.cacheKeys.indexOf(keys[i]) === -1) {
                        methods.cacheKeys.push(keys[i])
                    }
                }
            })

            stream.on('end', () => {
                resolve(methods.cacheKeys)
            })

            stream.on('error', reject)
        })
    }
}

So how do you test a function like that?

Automatic requests to a web server for testing purposes

I want to test my Java web application with many users, for example 2 or 3 thousand user requests at the same time. It is on my local web server. Is there any specific software or popular technique that can be used?

Run Nunit tests using teamcity on deployed site

I have configured TeamCity with Git to get my ASP.NET MVC project. I added tests with NUnit as the last step.

But one test checks a method which works only on the machine where my project is deployed (an access-restriction peculiarity).

So the test fails, because it runs on the TeamCity build agent machine rather than against the deployed environment. I have to run the tests against the deployed environment somehow.

Can I somehow make my tests check the functionality of the project on the machine the site is deployed to, or run the test DLL from the directory the site has been deployed to?

Passing driver/object to other page/class - java.lang.NullPointerException

I can't pass the driver/object to the next class/page, and I also get a NullPointerException in the first class.

PageObject class - SearchResultsPage:

public class SearchResultsPage extends BasePage{

    @FindBy(xpath = "//*[@data-original-title=\"Compare this Product\"]")
    List <WebElement> compareButton;

    @FindBy(partialLinkText = "Product Compare")
    WebElement urlComparePage;

    public SearchResultsPage(WebDriver driver) {
        super(driver);
        PageFactory.initElements(driver, this);
    }   
    public void compareItems(){
        for(WebElement compareButtons: compareButton){
            compareButtons.click();
        }
    }

    public void goToComparePage(){
        urlComparePage.click();

    }
}

PageObject class HomePage:

public class HomePage extends BasePage{

    public HomePage(WebDriver driver) {
        super(driver);
        PageFactory.initElements(driver, this);
    }
    public String PAGE_TITLE = "Your Store";
    WebDriver driver;   
    @FindBy(className = "input-lg")
    WebElement inputSearch; 
    @FindBy(className = "btn-lg")
    WebElement searchButton;        

    public void isHomePage(){
        String pageTitle = driver.getTitle();
        Assert.assertEquals(pageTitle, PAGE_TITLE);
    }

    public void inputIntoSearch(){
        String itemName = "ipod";
        inputSearch.sendKeys(itemName);
    }

    public  SearchResultsPage clickSearchButton(){
        searchButton.click();
        return PageFactory.initElements(driver, SearchResultsPage.class);
    }
}

Test class:

public class MainPage {
    HomePage hp;
    TopNavigation topNav;
    ComparePage cp;
    SearchResultsPage srp;

    @BeforeTest
    public void setUp(){
    WebDriver driver = new FirefoxDriver();
    driver.get("http://ift.tt/1lEuO3l");
    driver.manage().window().maximize();
    hp =  PageFactory.initElements(driver, HomePage.class);
    topNav = PageFactory.initElements(driver, TopNavigation.class);
    cp = PageFactory.initElements(driver, ComparePage.class);
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

    }

    @Test(priority = 0)
    public void checkIsHomePage(){
        hp.isHomePage();
    }

    @Test
    public void changeCurrency(){
        topNav.clickButtonChangeCurrency();
        topNav.setCurrency();
    }
    @Test
    public void searchProducts(){
        hp.inputIntoSearch();
        hp.clickSearchButton();
    }
    @Test
    public void addToCompare(){
        srp.compareItems();
    }
}

And I have 2 problems:

1. When I run the tests, checkIsHomePage() FAILS (NullPointerException) and changeCurrency() PASSES. I don't know why the first test fails if these two methods are in the same PageObject class, HomePage. What is wrong?

2. When the searchProducts method passes, I want to compare products using addToCompare(), but I have no idea how to use PageFactory.initElements to run the test on the page with the search results. How should I do this?
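
For comparison, a sketch of the pattern being asked about is below. It assumes BasePage keeps the WebDriver it receives; the key points are that HomePage should not declare a second, never-assigned driver field (that unassigned field is a typical cause of this NullPointerException), and that the test should keep the SearchResultsPage returned by clickSearchButton(), e.g. srp = hp.clickSearchButton(); inside searchProducts().

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class HomePage extends BasePage {

    // Keep the driver handed in by the test instead of declaring a new,
    // never-assigned field that stays null.
    private final WebDriver driver;

    @FindBy(className = "input-lg")
    private WebElement inputSearch;

    @FindBy(className = "btn-lg")
    private WebElement searchButton;

    public HomePage(WebDriver driver) {
        super(driver);
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    public SearchResultsPage clickSearchButton() {
        searchButton.click();
        // hand the same driver on to the next page object; the test stores the result
        return PageFactory.initElements(driver, SearchResultsPage.class);
    }
}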

A Study And Analysis Of Different Performance Testing Tools Used For Cloud Applications

Can anyone working on cloud spare a few moments to take my survey titled "Different Performance Testing Tools Used For Cloud Applications"? Your feedback is important! http://ift.tt/1KMCZpm

Friday, 29 January 2016

mujava mutants are generated properly but are not getting killed

I am running mujava, i.e. the mutation testing framework for Java. I tried running it with different versions of the JDK, but my mutants are not getting killed at all. They are generated properly but never killed. When I run mujava I also get the following warning in my terminal: "Note: C:\mujava\result\vendingMachine\traditional_mutants\int_getCredit()\SDL_20\vendingMachine.java uses unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details." Please help me out.

Automation.AddStructureChangedEventHandler doesn't work?

I'm trying to use the System.Windows.Automation library to do some UI testing and was able to make some progress, but I am unable to subscribe to the event of a popup window being created in my app. I've tried to use Automation.AddStructureChangedEventHandler on the root (desktop object) as well as on the window, but that didn't work. I also tried using different scopes, which also didn't help.

AutomationElement desktop = AutomationElement.RootElement;
AutomationElement app = desktop
    .FindFirst(TreeScope.Children,
        new PropertyCondition(AutomationElement.NameProperty,
            "Name of the App", PropertyConditionFlags.IgnoreCase));

ActivateWindow(app);

AutomationElement appWindow = app
    .FindFirst(TreeScope.Children,
        new PropertyCondition(AutomationElement.ControlTypeProperty,
            ControlType.Window));

// Find a button that opens a popup window and click it
AutomationElement button = appWindow
    .FindFirst(TreeScope.Children, Condition.TrueCondition)
    .FindAll(TreeScope.Children, Condition.TrueCondition)[8];

MoveMouseToAndClick(button);

Automation.AddStructureChangedEventHandler(desktop, TreeScope.Descendants, setupWindowOpen);

The setupWindowOpen handler fires sometimes, but it looks like that happens for other apps, not mine (I'm seeing Internet Explorer ids on the sender element object). Thanks in advance.

Promise catch not being caught in test

I have some code that looks like:

function foo(){
  bar().catch(function(){
    //do stuff
  });
}

function bar(){
  return promiseReturner().then(
    function(result){
      if(_.isEmpty(result)){
        throw "Result is empty";
      }
    }
  )
}

I'm trying to test that the //do stuff block is called when result is empty:

deferred.resolve(null);
foo();
$rootScope.$apply();

Now this does actually trigger the throw block, but for some reason that throw block is not being caught by the catch function. What's more interesting is that when this same code runs outside of the test environment, it behaves as expected.

Why am I unable to trigger the catch block in my test code?

Workflow for testing endianness compatibility

What is a good workflow for testing code written with support for varying endianness? For example, I can test against my current architecture, but what about testing (rather than guessing) that I haven't made a mistake for other architectures?

Only BE and LE are of concern but I understand that there are some mixed endian systems out there. Is there a simple way to automate tests against these architectures?

My environment is Windows (little endian) so Bochs isn't a good solution. Currently looking at installing a BSD on QEMU.

Click on random button/link

I have a problem. I want to click randomly on an "Add to Cart" button. Here is the website: demo.opencart -compare

I'm not sure what I'm doing wrong. My code:

List <WebElement> links = driver.findElements(By.xpath("//input[@value = \"Add to Cart\"]"));
            Random gen = new Random();
            WebElement link = links.get(gen.nextInt(links.size()));

            link.click();

and I get this error:

java.lang.IllegalArgumentException: bound must be positive
    at java.util.Random.nextInt(Random.java:388)
    at pages.HomePage.chooseRandomItem(HomePage.java:112)
    at testes.MainPage.chooseRandomItemToCart(MainPage.java:62)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:86)
    at org.testng.internal.Invoker.invokeMethod(Invoker.java:643)
    at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:820)
    at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1128)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
    at org.testng.TestRunner.privateRun(TestRunner.java:782)
    at org.testng.TestRunner.run(TestRunner.java:632)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:366)
    at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:361)
    at org.testng.SuiteRunner.privateRun(SuiteRunner.java:319)
    at org.testng.SuiteRunner.run(SuiteRunner.java:268)
    at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
    at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
    at org.testng.TestNG.runSuitesSequentially(TestNG.java:1244)
    at org.testng.TestNG.runSuitesLocally(TestNG.java:1169)
    at org.testng.TestNG.run(TestNG.java:1064)
    at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:113)
    at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:206)
    at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:177

I would like to ask you to help me.
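
For context, Random.nextInt(bound) throws exactly this IllegalArgumentException when the bound is zero, i.e. when findElements matched nothing on the current page. A guarded version of the snippet (same locator, different quoting) might look like this sketch:

import java.util.List;
import java.util.Random;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class RandomAddToCart {

    private static final Random GEN = new Random();

    public static void clickRandomAddToCart(WebDriver driver) {
        List<WebElement> links =
                driver.findElements(By.xpath("//input[@value='Add to Cart']"));
        if (links.isEmpty()) {
            // nothing matched: fail with a clear message instead of calling nextInt(0)
            throw new IllegalStateException("No 'Add to Cart' elements found on the page");
        }
        links.get(GEN.nextInt(links.size())).click();
    }
}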

factory_girl: how to create a HABTM association using traits declared for the child?

How does one create a HABTM relationship using factory_girl using traits declared for the child?

This is basically what I am trying to do:

factory :site do
  name 'some domain name'

  trait :rejected do
    rejected true
  end

  trait :processing do
    processing true
  end
end

factory :user do
  trait :bob do
    name 'Bob'

    after(:create) do |user, evaluator|
      create_list(:rejected, 1, users: [user])
      create_list(:processing, 1, users: [user])
    end
  end
end

class User < ActiveRecord::Base
  has_and_belongs_to_many :sites
end

I have been searching for examples like above, but everything I have found so far only has the trait in the parent and just uses the child factory. When I try to reference the child trait, I get a Factory not registered error.

In addition, the list is not going to change. I don't need this to be dynamic. I know what sites I want to give to each user.

How can I split my Java Selenium tests into separate classes?

I'm currently working at my job to perform GUI testing of our web page using Selenium 2 via Java in Eclipse. I've been trying to program my tests in such a way that I maximize the amount of code I can reuse, and as a consequence I now have a lot of helper methods that function almost like a framework. This has led to my test class becoming fairly bloated, with only one method used as the actual test and the rest being the implementation of the test.

Currently I just run the testing right from eclipse with all my methods being static.

From what I understand there are a couple different ways I could try to separate things out:

One way would be to put all the methods into a class I use as a framework and extend it when writing an actual test, but I don't know if having a framework in a framework (Selenium) makes sense.

Another way would possibly be making my helper methods into an object where I can have one of these objects for each test. I don't know if this is good practice though, or if it will cause problems down the road. It would also mean I'd have to type more to do the same amount of testing.

My main questions are:

What's the best way to split up my testing class into test classes and an implementation class? Is what I'm doing outside the intended usage of Selenium?
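
For concreteness, the first option mentioned above (a shared helper base class that test classes extend) might look roughly like this sketch; the class names, URL and element id are made up:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Shared helpers live here; each test class extends it instead of keeping
// everything as static methods next to a single test method.
public abstract class WebUiTestBase {

    protected WebDriver driver;

    protected void openApp(String url) {
        driver = new FirefoxDriver();
        driver.get(url);
    }

    protected void clickById(String id) {
        driver.findElement(By.id(id)).click();
    }

    protected void quit() {
        if (driver != null) {
            driver.quit();
        }
    }
}

// A test class then only expresses the scenario.
class LoginTest extends WebUiTestBase {

    void loginOpensDashboard() {
        openApp("http://example.com");   // placeholder URL
        clickById("login-button");       // placeholder element id
        // assertions against the resulting page would go here
        quit();
    }
}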

Can't run cucumber tests with protractor

Every time I run the tests, I get the error: TypeError: e.getContext is not a function

I am using examples from http://ift.tt/13lC1e1 with some changes in world.js (I made them to fix timeout errors).

Versions of apps:

  • node 4.2.6
  • cucumber 0.9.4
  • protractor 3.0.0
  • protractor-cucumber-framework 0.3.3
  • zombie 4.2.1

My world.js:

// features/support/world.js
var zombie = require('zombie');
zombie.waitDuration = '30s';
function World() {
  this.browser = new zombie(); // this.browser will be available in step definitions
  this.visit = function (url, callback) {
    this.browser.visit(url, callback);
  };
}

module.exports = function() {
  this.World = World;
  this.setDefaultTimeout(60 * 1000);
};

My sampleSteps.js:

// features/step_definitions/my_step_definitions.js

module.exports = function () {
  this.Given(/^I am on the Cucumber.js GitHub repository$/, function (callback) {
    // Express the regexp above with the code you wish you had.
    // `this` is set to a World instance.
    // i.e. you may use this.browser to execute the step:

    this.visit('http://ift.tt/13lC1e1', callback);

    // The callback is passed to visit() so that when the job's finished, the next step can
    // be executed by Cucumber.
  });

  this.When(/^I go to the README file$/, function (callback) {
    // Express the regexp above with the code you wish you had. Call callback() at the end
    // of the step, or callback.pending() if the step is not yet implemented:

    callback.pending();
  });

  this.Then(/^I should see "(.*)" as the page title$/, function (title, callback) {
    // matching groups are passed as parameters to the step definition

    var pageTitle = this.browser.text('title');
    if (title === pageTitle) {
      callback();
    } else {
      callback(new Error("Expected to be on page with title " + title));
    }
  });
};

My sample.feature:

# features/my_feature.feature

Feature: Example feature
  As a user of Cucumber.js
  I want to have documentation on Cucumber
  So that I can concentrate on building awesome applications

  Scenario: Reading documentation
    Given I am on the Cucumber.js GitHub repository
    When I go to the README file
    Then I should see "Usage" as the page title

My protractor-conf.js:

exports.config = {

  specs: [
    'features/**/*.feature'
  ],

  capabilities: {
    'browserName': 'chrome'
  },

  baseUrl: 'http://127.0.0.1:8000/',

    framework: 'custom',
    frameworkPath: require.resolve('protractor-cucumber-framework'),
  // relevant cucumber command line options
  cucumberOpts: {
    require: ['features/support/world.js', 'features/sampleSteps.js'],
    format: "summary"
  }
};

GUI testing coverage

I have two questions. My first question is: do applications exist which measure the coverage of GUI testing for web applications (not code coverage, but the coverage of GUI components on the web page)?

My second question is: is GUI testing with, for example, Selenium necessary if we have tests for the JavaScript as well?

Thank you in advance.

Capture console log of iframe with PhantomJS

Is it possible to capture console log of a specific iframe with PhantomJS?

page.onConsoleMessage = function () {}

Jasmine, Mocha and Karma

I am beginning to work with error testing. I have done a lot of research, but I would like to know: are there any main advantages or disadvantages to using the Karma test runner with Jasmine or Mocha?

UFT PDF COMPARISON

I need to compare a PDF generated with the UFT recorder against a reference PDF I have in my documents. I think I have to use something like:

FileContent("nameofmyrefpdf.pdf").Check CheckPoint("nameofmynewpdf")

Am I right?

Thanks in advance for any help.

Test Data Generation: How to automate? [Excel -> SQL]

I want to automate my test data generation and am hoping for some approaches.

My current situation: I have two Excel sheets with static data (patients and doctors) and some with dynamic data [for example weight records for an active patient, weight records for an inactive patient, etc.] (these values are not static, because they should differ for every patient and I want to see trends and dependencies). What I want is to create INSERT INTO statements out of this. It needs to be considered that for every patient a dynamic sheet must be calculated, and that there is a foreign key relation.
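
To make the target concrete, a minimal sketch of this kind of generator in Java is shown below; the table and column names are illustrative only, and it assumes the sheet rows have already been read into plain values:

import java.util.Locale;

public class InsertGenerator {

    // One INSERT per patient row from the static sheet.
    static String patientInsert(String id, String name) {
        return String.format(
                "INSERT INTO patient (id, name) VALUES ('%s', '%s');", id, name);
    }

    // One INSERT per calculated weight record; patient_id is the foreign key
    // back to the patient row generated above.
    static String weightInsert(String patientId, String recordedOn, double kg) {
        return String.format(Locale.ROOT,
                "INSERT INTO weight_record (patient_id, recorded_on, weight_kg) VALUES ('%s', '%s', %.1f);",
                patientId, recordedOn, kg);
    }

    public static void main(String[] args) {
        System.out.println(patientInsert("P1", "Jane Doe"));
        System.out.println(weightInsert("P1", "2016-01-28", 72.5));
    }
}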

What is the best way to do this?

Best regards and thanks in advance.

Is it possible to performance test the webmethods platform (integration servers) using JMETER

Is it possible to performance test the webMethods platform (Integration Servers) using JMeter? If so, can someone talk me through the steps?

many thanks!

How to integrate sauce-connect-launcher on webdriver.io

How do I launch the tunnel from sauce-connect-launcher and run the tests against the Sauce Labs servers?

Thursday, 28 January 2016

Validate Video View appium (java) android

I am automating an Android app using Appium (Java).
I have an android.widget.VideoView on the page and, using verifyElementByClass, I can check whether the VideoView exists on that page.
Now I need to check whether it is actually playing any video or not.
How do I do that?

What's a good sample for mocking in Java testing?

Is there any good sample of an EntityManager mock in Java?
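
A minimal sketch of what such a sample usually looks like with Mockito and JUnit is below; Customer is a hypothetical entity, only there to make the example self-contained:

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import javax.persistence.EntityManager;
import org.junit.Test;

public class EntityManagerMockTest {

    // Hypothetical entity, only here to make the example self-contained.
    static class Customer {
    }

    @Test
    public void findReturnsTheStubbedEntity() {
        EntityManager em = mock(EntityManager.class);
        Customer expected = new Customer();
        when(em.find(Customer.class, 1L)).thenReturn(expected);

        Customer actual = em.find(Customer.class, 1L);

        assertSame(expected, actual);
        verify(em).find(Customer.class, 1L);
    }
}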

Thanks in advance

Testing Socket.io Event Handler using Sinon

I am trying to write a functional test of a socket.io event handler. To keep it simple, we will just look at the 'connect' event on the client.

Suppose:

MyClass.prototype.onConnected = function(){ console.log('hello'); }
MyClass.prototype.connect = function() {
     this.socket.connect(...params...);
};
...
// Later on in the instance `myclass`
...
sinon.spy(this, 'onConnected');
...
this.socket.on('connect', this.onConnected);

The above code is supposed to attach a sinon spy to the 'connect' event handler of the socket.io client. Therefore, I would like to write a test that asserts something like:

describe(...
     it('should connect', function(){
          myClass.connect();
          myClass.onConnected.should.have.been.called;
     }
     ...
);

Now I see the problem with the ticks here. The assertion is executed way before socket.io sends the connect event, therefore the test fails. One way around this is using timers or setTimeout to let some time pass between the connect call and the assertion, but I have no way of knowing how long the response will be delayed. Suppose I have 1000 tests like this; that would make me wait hundreds of seconds until the test suite finishes! Another problem is that my computer could be so loaded at that time that socket.io responds much later than usual. In that case, fake timers won't work at all.

Is there a nice, neat solution to this problem? I have been looking at chai-as-promised, which provides the eventually keyword, but it won't solve my case either without uglifying all the code with promise resolvers (and also, I won't be able to check how many times the event handler is called, etc.). Please help!

What is the best practice for testing API controllers (active model serializers)?

I am working on an app, and up to this point I have been testing just things like authentication and request/response codes, but it seems like a good idea to test the structure of the payload too, i.e. whether there are embedded resources or sideloaded resources. How do you test this? Here is a sample of some of the testing I am doing. I am using active_model_serializers, but it seems like a bit of cruft to organize.

describe '#index' do
  it 'should return an array of email templates' do
    template = EmailTemplate.new(name: 'new email template')
    EmailTemplate.stub(:all).and_return([template])
    get :index
    payload = {:email_templates => [JSON.parse(response.body)["email_templates"][0].symbolize_keys]}
    template_as_json_payload = {:email_templates => [ActiveModel::SerializableResource.new(template).as_json[:email_template] ]}
    expect(payload).to eq(template_as_json_payload)
  end
end

How does ember test discover tests for in-repo-addons?

TL;DR Is anyone successfully running unit tests from within an in-repo-addon?

I saw a related post (ember-cli run tests from in-repo addons) that mentioned that it is possible to write tests in a "test-support" folder within the in-repo-addon. I tried this, and the tests are not being found when running ember test. Is there something else required to be able to discover these tests? Or perhaps something has changed since the post?

How are test cases in Selenium/TestNG executed without a main method?

Java always needs a main method to execute a program, but when we use the TestNG framework we don't write a main method, so how does the suite execute?
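
For illustration: the entry point lives in the runner, not in the test classes. org.testng.TestNG has its own main method, which IDE plugins and build tools invoke; a programmatic equivalent (with a hypothetical test class) looks like this:

import org.testng.TestNG;

public class RunSuite {

    public static void main(String[] args) {
        // This is roughly what the Eclipse/Maven TestNG integration does for you:
        // the runner supplies the main method, discovers the @Test methods via
        // reflection and executes them.
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { MyTests.class }); // MyTests is hypothetical
        testng.run();
    }
}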

Testing multiple public methods that call the same private method

I'm trying to figure out whether it's an API design flaw, whether it is actually OK, or whether the SRP is being violated.

I have two public methods, initialize() and onListRefresh(). Both of them call the same private method, updateList(). The only difference between them is that initialize() also checks for a null argument and throws an exception.

The issue is that in order to test both public methods, I practically have to copy-paste the same mocks, stubs, expectations and assertions, which are all about what happens in the private method, and it feels wrong. So which one is it:

  1. Is there a flaw in the public API design?
  2. It's alright, that's how it's supposed to be.
  3. You're violating SRP by using initialize() to do both checking for an argument AND calling updateList()

How to test a ParseException that is thrown by SimpleDateFormat? [duplicate]


@Rule
public final ExpectedException exception = ExpectedException.none();

@Test
public void doStuffThrowsParseException() {
    TestException foo = new TestException();

    exception.expect(ParseException.class);
    foo.doStuff("haha");
}

The code under test:

Date submissionT;
SimpleDateFormat tempDate = new SimpleDateFormat("EEE MMM d HH:mm:ss z yyyy");
public void doStuff(String time) {
    try {
        submissionT=tempDate.parse(time);
      }
      catch (Exception e) {     
        System.out.println(e.toString() + ", " + time);
      }
}

This results in: java.text.ParseException: Unparseable date: "haha", haha.

Although a ParseException is thrown, the test fails. Why?
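
For reference, ExpectedException can only match an exception that escapes the method under test, and the doStuff() shown above swallows the ParseException in its catch block. A variant that the shown test could observe would look like this sketch (not the original class):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TestException {

    private Date submissionT;
    private final SimpleDateFormat tempDate =
            new SimpleDateFormat("EEE MMM d HH:mm:ss z yyyy");

    // Let the ParseException propagate instead of catching and logging it,
    // so the ExpectedException rule in the test can see it.
    public void doStuff(String time) throws ParseException {
        submissionT = tempDate.parse(time);
    }
}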

Jasmine tests with Angular and UI Bootstrap modal keep failing. Why are my spies not being called?

I am getting these three errors below when running my jasmine tests:

PhantomJS 1.9.8 (Linux 0.0.0) confirmOnExit directive record is dirty unsaved changes modal should call record.$save and call hideTab when saveChanges is called FAILED
Expected spy $save to have been called.

PhantomJS 1.9.8 (Linux 0.0.0) confirmOnExit directive record is dirty unsaved changes modal should call record.revert() and call hideTab when discardChanges is called FAILED
Expected spy revert to have been called.

PhantomJS 1.9.8 (Linux 0.0.0) confirmOnExit directive record is dirty unsaved changes modal should not save, call cancelHide, and close the modal when cancel is called FAILED
Expected 4 to be 3.

The functionality of the app works as it's supposed to, but these three tests keep failing. Does anyone know why the spies aren't being called? The test code is below:

    /* globals jasmine */

    describe('confirmOnExit directive', function() {
      var $compile, $rootScope, scope, compiledElement, directiveCtrl,
        UnsavedChangesModalSvc, state, $timeout, $document, modal,
        modalScope, modalCtrl, OnBeforeUnload;

      beforeEach(angular.mock.module('arcContentForms', function($provide) {
          $provide.value('OnBeforeUnload', {
            register: function() {}
          });
        }));
      beforeEach(angular.mock.module('ui.bootstrap'));
      beforeEach(angular.mock.module('arcAlerts'));

      var fakeTab = {
        id: '1234',
        hideTab: function() {},
        cancelHide: function() {}
      };



      function findButtonWithText(text) {
        var buttons = angular.element($document).find('button');
        for (var i = 0; i < buttons.length; i++) {
          var re = new RegExp(text, 'g');
          if (angular.element(buttons[i]).html().match(re)) {
            return angular.element(buttons[i]);
          }
        }
        return [];
      }

      function findElementByClass(element, cls) {
        var buttons = angular.element($document).find(element);
        var modal = [];
        for (var i = 0; i < buttons.length; i++) {
          if (buttons[i].classList.contains(cls)) {
            modal = angular.element(buttons[i]);
          }
        }
        return modal;
      }

      function findNumberOfModals() {
        var buttons = angular.element($document).find('div');
        var modal = 0;
        for (var i = 0; i < buttons.length; i++) {
          if (buttons[i].classList.contains('modal')) {
            modal++;
          }
        }
        return modal;
      }


      beforeEach(inject(function(_$compile_, _$rootScope_, _OnBeforeUnload_, _UnsavedChangesModalSvc_,
              _$state_, _$window_, $q, _$timeout_, _$document_) {
        $document = _$document_;
        $timeout = _$timeout_;
        state = _$state_;
        OnBeforeUnload = _OnBeforeUnload_;
        spyOn(OnBeforeUnload, 'register');

        UnsavedChangesModalSvc = _UnsavedChangesModalSvc_;
        $compile = _$compile_;
        $rootScope = _$rootScope_;
        scope = $rootScope.$new();
        scope.record = {
          _status: "",
          $save: function() {
            var deferred = $q.defer();
            $timeout(function() {
              deferred.resolve({});
            });
            return deferred.promise;
          },
          revert: function() {
            // changes everything back
          }

        };

        scope.tabId="1234";

        spyOn(UnsavedChangesModalSvc, 'createModal').and.callThrough();
        spyOn(scope.record, '$save').and.callThrough();

        fakeTab = {
          id: '1234',
          hideTab: function() {},
          cancelHide: function() {}
        };
        spyOn(fakeTab, "hideTab");
        spyOn(fakeTab, "cancelHide");

        compiledElement = $compile('<form name="testForm" class="form-horizontal" arc-confirm-on-exit="record" arc-confirm-on-exit-tab-id="tabId"></form>')(scope);
        scope.$digest();
      }));


      it('should compile', function() {
        expect(compiledElement).toBeDefined();
      });

      it('should register clearSessionStorage with the onBeforeUnload service', function() {
        expect(OnBeforeUnload.register).toHaveBeenCalled();
      });

      describe('record is dirty', function() {
        beforeEach(function() {
          scope.record.isDirty = function() {
            return true;
          };
        });

        it('should call createModal when state is changed', function() {
          $rootScope.$broadcast("arc::tab::notifyClose",  fakeTab);
          scope.$digest();
          expect(UnsavedChangesModalSvc.createModal).toHaveBeenCalled();
        });

        describe('unsaved changes modal', function() {
          beforeEach(function() {
            $rootScope.$broadcast("arc::tab::notifyClose", fakeTab);
            scope.$digest();
            modal = findElementByClass('div', 'modal');
            modalScope = modal.scope();
            modalCtrl = modalScope.UnsavedChangesCtrl;
          });

          it('should call record.$save and call hideTab when saveChanges is called', function() {
            modalCtrl.saveChanges();
            $timeout.flush();
            expect(scope.record.$save).toHaveBeenCalled();
            expect(fakeTab.hideTab).toHaveBeenCalled();
          });

          it('should call record.revert() and call hideTab when discardChanges is called', function() {
            spyOn(scope.record, 'revert');
            modalCtrl.discardChanges();
            $timeout.flush();
            expect(scope.record.revert).toHaveBeenCalled();
            expect(fakeTab.hideTab).toHaveBeenCalled();
          });

          it('should not save, call cancelHide, and close the modal when cancel is called', function() {
            var numModals = findNumberOfModals();
            spyOn(scope.record, 'revert');
            modalCtrl.cancel();
            $timeout.flush();
            expect(findNumberOfModals()).toBe(numModals-1);
            expect(scope.record.revert).not.toHaveBeenCalled();
            expect(fakeTab.hideTab).not.toHaveBeenCalled();
            expect(fakeTab.cancelHide).toHaveBeenCalled();
          });
        });
      });

      describe('record is clean', function() {
        beforeEach(function() {
          scope.record.isDirty = function() {
            return false;
          };
        });

        it('should call not createModal when state is changed', function() {
          $rootScope.$broadcast("arc::tab::notifyClose", fakeTab);
          scope.$digest();
          expect(UnsavedChangesModalSvc.createModal).not.toHaveBeenCalled();
        });
      });

      describe('different tab', function() {
        beforeEach(function() {
          scope.record.isDirty = function() {
            return true;
          };
        });


        it('should call not createModal when state is changed', function() {
          $rootScope.$broadcast("arc::tab::notifyClose", {id: 'not1234'});
          spyOn(scope.record, 'isDirty');
          scope.$digest();
          expect(scope.record.isDirty).not.toHaveBeenCalled();
        });
      });
    });

How to run script in mvn as test

I want to run test scripts (bash/shell) in mvn, but when the script exits with a problem I want the build to be "UNSTABLE", not "FAILURE" (in Jenkins: yellow instead of red). How can I do this?

QA and testing: a crossing area approach

I come with a request for advice, hints, paths to follow, etc.

The point is the following. My goal is to develop a QA/testing area within a software company whose product is a web application.

As usual, there are many functional testers who ensure the application has the right quality to be released to production.

My goal is to apply a strategy that helps the company grow while the number of testers does not grow exponentially. If the functionality increases it is natural to increase the number of testers, but mainly I want to limit the number of manual testers because I find manual testing not as productive as it should be: one of the main human characteristics is that we make mistakes.

So the straightforward answer is to implement automation. I am already aware of some of the many tools for doing so, and we are already doing it. However, I wish to go even deeper. Usually these tools are oriented towards the final product, but I wish to take a broader approach focused on the quality of each single artifact produced; in other words, if each single building block has high quality, the probability of high quality in the system built by combining them increases. So my goal is to apply QA to as many artifacts as the different areas of the company produce: say technical design, implementation/code, maybe architecture artifacts, etc., and of course the whole product.

I have some ideas for some artifacts but no idea for others. Any comment or criticism is welcome.

My plan is the following

Final product: default approach: manual plus automatic testing

Code: Unit testing combined with static analysis

Technical design: Formal specification languages such as TLA or event-B

Architecture artifacts: no idea; my knowledge here is quite limited

Besides the skills needed for carrying it out, my questions are the following:

  • Do you think it is doable or realistic?
  • Do you think this approach can help to improve the quality of the product?
  • Is it worth it?
  • Tools to consider...

Thanks a lot in advance.

Jenkins JaCoCo Coverage with multiple classes with "$"

I have been using the JaCoCo/Emma Jenkins plugin for a long time and successfully getting code coverage metrics, but I have some repetition of classes with a "$" sign, which brings down the line coverage metrics. E.g. I have VideoFragment.class but also multiple classes like:

  1. VideoFragment$1
  2. VideoFragment$2
  3. VideoFragment$3
  4. VideoFragment$4 and so on

I can just ignore them, which brings up the metrics, but I am wondering why they show up and whether it is OK to ignore them.


Trouble checking if the return type is HttpNotFoundResult using xUnit to test an MVC controller

I am trying to test a Microsoft.AspNet.Mvc.Controller that returns a Task<IActionResult> if the id passed in gets a hit, and returns HttpNotFound() if there are no hits.

How can I, using xUnit, test whether what I get back is the HttpNotFound or an actual result?

This is Controller method:

[HttpGet("{id}")]
public async Task<IActionResult> Get(string id)
{
    var company = await _repository.GetSingle(id);
    if (company == null)
        return HttpNotFound();

    return new ObjectResult(company);
}

And this is the test method (that is not working):

[Theory]
[InlineData("1")]
[InlineData("01")]
[InlineData("10")]
public async void TestGetSingleNonExistingCompany(string id)
{
    var controller = new CompanyController(new CompanyRepositoryMock());
    try
    {
        var res = await controller.Get(id);
        Assert.False(true);
    }
    catch (Exception e)
    {
        Assert.True(true);
    }
}

The problem I guess is that the controller.Get(id) does not actually throw an Exception, but I cannot use typeOf, because the type of the res variable is decided at compile-time, and not runtime.

When running Assert.IsType:

[Theory]
[InlineData("1")]
[InlineData("01")]
[InlineData("10")]
public async void TestGetSingleNonExistingCompany(string id)
{
    var controller = new CompanyController(new CompanyRepositoryMock());
    var res = await controller.Get(id);
    Assert.IsType(typeof (HttpNotFoundResult), res.GetType());
}

I get this message:

Assert.IsType() Failure
Expected: Microsoft.AspNet.Mvc.HttpNotFoundResult
Actual:   System.RuntimeType

Any ideas?

How to add 'testListener' to custom test reporter in SBT

I'm having difficulty implementing a custom test reporter in Scala using SBT.

I have a basic SBT project set up with this build.sbt file:

name := "Test Framework"

version := "0.1"
scalaVersion := "2.11.7"

scalacOptions += "-deprecation"
scalacOptions += "-feature"

libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.12.4" % "test"


testFrameworks += new TestFramework("custom.framework.MyTest")
testListeners += ??? // This line does not compile

My MyTest.scala class is located in the project folder under projects/custom/framework/MyTest.scala and looks like this:

import sbt.TestsListener._
import sbt.TestReportListener._

class MyTest extends TestReportListener {

    def doInit(): Unit = {
        println("Starting Test")
    }

    def testEvent(event: TestEvent): Unit = {
        println(event.result.get)
    }

    def startGroup(name: String): Unit = {
        println("Group Started")
    }
}

The documentation here is sparse, and I'm obviously missing something. It states that I need to

Specify the test reporters you want to use by overriding the testListeners method in your project definition. Where customTestListener is of type sbt.TestReportListener.

This should be possible by doing testListeners += customTestListener. Since my class MyTest extends TestReportListener, I thought I could just do testListeners += custom.framework.MyTest or testListeners += new custom.framework.MyTest, but this is clearly not the case.

I'm running sbt test to execute the tests.

Does anyone know how this is done correctly?
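Not an authoritative answer, but one detail that may matter: code referenced from build.sbt has to be compiled as part of the build definition, which sbt picks up from the project/ directory (singular), not projects/. A minimal sketch under that assumption:

// build.sbt -- assumes MyTest.scala lives under project/ and fully implements sbt's TestReportListener
testListeners += new custom.framework.MyTest()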

PHPUnit throws caught exception

I have the following PHP code which I want to test.

public function save(Object $test)
{
    try {
       $test->doSomething();
    } catch (\Exception $e) {
        error_log($e);
        return false;
    }
    return true;
}

The method doSomething() may eventually throw an exception; that's why it is surrounded with a try/catch.

public function testSaveOnError()
{
    $testObject= new Object();
    $mock
        ->expects($this->once())
        ->method('save')
        ->with($testObject)
        ->willThrowException(new \Exception());

    $returnData = $object->save($testObject);
    $this->assertFalse($returnData);
}

The test runs green, but the caught exception is still traced and logged in the PHPUnit console. Is there a way to disable exception tracing in unit tests, or am I doing anything wrong?
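One hedged workaround, assuming the noise simply comes from error_log() writing to stderr while the tests run, is to point error_log somewhere harmless for the duration of the test:

// sketch: silence error_log output so the exception caught inside save()
// does not clutter the PHPUnit console
protected function setUp()
{
    ini_set('error_log', '/dev/null');
}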

Mockito: How can I mock a private final field? [duplicate]

This question already has an answer here:

I'm using the Mockito framework for mocking, and I am trying to mock a private field, but it doesn't work.

How can I mock a private final field with Mockito?

I have the following source code.

public class HibernatePoiDAO {


private final HibernateDriverDAO driverDAO;
private final HibernateDriverLocationDAO driverLocationDAO;

@Autowired
public HibernatePoiDAO(final SessionFactory sessionFactory) {
    this.driverDAO = new HibernateDriverDAO(sessionFactory);
    this.driverLocationDAO = new HibernateDriverLocationDAO(sessionFactory);
}

public List<Poi> findWithin(final Float p1Lat,
                            final Float p1Lon,
                            final double radiusInMeter,
                            final Double rating,
                            final OnlineStatus onlineStatus,
                            final Integer manufacturingYear,
                            final Integer seats) {
    //I want mock this method call
    List<DriverLocation> driverLocationsWithin = driverLocationDAO.findWithin(new GeoCoordinate(p1Lat, p1Lon), radiusInMeter);


    List<Filter<Driver>> filters = this.createFilters(rating, onlineStatus, manufacturingYear, seats);
    final Map<Driver, DriverLocation> filteredDriverLocationsWithin = this.filters(filters, driverLocationsWithin);
    List<Driver> drivers = new ArrayList<Driver>(filteredDriverLocationsWithin.keySet());
    Map<Long, DriverLocation> driverLocationMap = this.toDriverLocationMap(new ArrayList<DriverLocation>(filteredDriverLocationsWithin.values()));

    return this.makePoiList(drivers, driverLocationMap);
}


}

I tried this:

final HibernateDriverLocationDAO driverLocationDAO = Mockito.mock(HibernateDriverLocationDAO.class);
Mockito.when(driverLocationDAO.findWithin(
        Matchers.any(GeoCoordinate.class),
        Matchers.any(Double.class)
)).thenReturn(driverLocations);


    final HibernateTaxiPoiDAO controller=new HibernateTaxiPoiDAO(this.sessionFactory);

    // WHEN
    List<Poi> pois=controller.findWithin(53.5f,9.53f,20000,5.0D,null,null,null);

but it does not work. What am I doing wrong?
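One common workaround (a sketch, not the only option) is to inject the collaborators instead of newing them up in the constructor, or, if the constructor cannot change, to replace the private final field by reflection. Since the class is already Spring-managed, Spring's ReflectionTestUtils is a reasonable fit; the names below are taken from the question:

import org.springframework.test.util.ReflectionTestUtils;

// build the object under test as usual ...
HibernatePoiDAO dao = new HibernatePoiDAO(sessionFactory);

// ... then swap the private final collaborator for the mock prepared above
ReflectionTestUtils.setField(dao, "driverLocationDAO", driverLocationDAO);

// WHEN
List<Poi> pois = dao.findWithin(53.5f, 9.53f, 20000, 5.0D, null, null, null);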

Angular unit test vs integration test

I've recently started writing unit tests for angular app I'm working on. There is one thing that I'm not sure about and that's a difference between unit test and integration test in context of Angular.

Assuming that I have a controller to test which depends on another (non angular) service, should I create a mock of a service or try to use real service when it's possible.

If I inject the service itself doesn't that mean I'm creating an integration test instead of a unit test?

I'm asking about this because my work colleagues keep writing tests that inject real services and still call them unit tests. It sucks big time, especially when you have to debug errors from injected services in tests and each service depends on 5 other services...

Django Test Runner Not Finding Test Methods

I recently upgraded from Django 1.4 to 1.9 and realized something weird was going on with my tests. Here is the project structure:

project
  manage.py
  app/
    __init__.py
    tests/
      __init__.py
      test_MyTests.py

The test_MyTests.py file looks like this:

from django.test import TestCase

class MyTests(TestCase):
    def test_math(self):
        self.assertEqual(2, 2)

    def test_math_again(self):
        self.assertEqual(3, 3)

The test runner can find all of the tests when I run ./manage.py test app or ./manage.py test app.tests. However when I try running ./manage.py test app.tests.MyTests I get:

  File "/usr/lib/python2.7/unittest/loader.py", line 100, in    loadTestsFromName
parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'MyTests'

If I change the test class name to test_MyTests I can then run ./manage.py test app.tests.test_MyTests and it will find all tests. I was reading the Django docs, though, and it seems the file name and class name don't have to be the same. In either case shown above I still can't run individual tests like this: ./manage.py test app.tests.MyTests.test_math

I would like to be able to run individual tests and test classes, can someone help me here? Thanks.
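For what it's worth, with the default Django test runner the label is a dotted Python path, so it usually has to include the module that actually defines the class. A sketch using the layout from the question:

./manage.py test app.tests.test_MyTests.MyTests
./manage.py test app.tests.test_MyTests.MyTests.test_math

app.tests.MyTests fails because MyTests is not an attribute of the app.tests package itself (unless you import it in tests/__init__.py).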

How to write a good software documentation

I am wondering how software documentation fits into a real production process. For example, I know that it is good practice for code to be covered by unit tests. The code is written, then sent to a testing department. The tester reads the design document and writes tests based on that design document. Now there are questions concerning a program written in C:

  • What should the design document contain? What should it not contain?
  • At which point of development should I write the documentation?
  • How do I make it useful so other people can really use it, for example for writing unit tests?
  • Do I need to create a separate design document for a "feature" or for every C function, for example?

What would be the most basic Mac Mini to test LibGDX apps on for iPhone?

I am developing apps for Android using LibGDX, and I would like to publish them for iPhone as well. I do not own a Mac and I heard that a Mac Mini would be a cheap solution to develop for iPhone.

What are the minimum requirements for a Mac Mini to test apps on? I would be primarily developing on Windows, so my idea would be that it's okay if the Mac is not the fastest one. I just need it primarily for testing.

I would like to know the minimum CPU, memory, O/S, and software that is necessary for testing.

Unit Code testing of internal functions

I am a newbie at testing, so I am stumbling over how to test internal functionality in some code parts. How can I test ONLY the privateParseAndCheck and/or privateFurtherProcessing functionality with different input, without exposing them as public functions?

-(BOOL) publicFunction {
    // some stuff with network
    NSError* error;
    NSData* data = load(&error);

    // now I got data, parse and check
    BOOL result = privateParseAndCheck(data, error, ...);

    if (result) {
        privateFurtherProcessing();
    }
    return result;
}

Is rewriting the code the solution? I am also interested in experiences, tips and solutions regarding Xcode Server.

Test Driven Development Practical Solution (Industry Experience) in existing projects and new projects

I am a bit confused by the TDD concept and its practical application in organizations. I am highlighting the word practical, so please don't give me theoretical answers (like: write the test first, fail it, just pass it, add new functionality, then refactor the code, repeat the cycle). That would be annoying.

I am asking about practical application in the software industry. Those of you doing software development in the IT industry, please share your experience of TDD and how you implemented it practically in existing projects and also in new projects.

These days companies stress TDD as a requirement for a developer role, but I don't think the theoretical definition of TDD is followed in practice. Mostly what I take from TDD is that every class we write should be able to be unit tested in isolation. Necessarily it means each application class has a unit test class.

Please share your practical industry thoughts and how you applied TDD in software projects.

Testing with dependencies (TestNG)

I have these 2 tests:

@Test
public void Test1() throws Exception { ... }

@Test
public void Test2() throws Exception { ... }

I would like Test2 to run after Test1, and only if Test1 was successful.

How can I achieve this in TestNG?
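A minimal sketch of what this can look like with TestNG's dependsOnMethods attribute (by default a dependent test is skipped, rather than failed, when the method it depends on does not succeed):

@Test
public void Test1() throws Exception {
    // ...
}

// runs after Test1, and is skipped unless Test1 succeeded
@Test(dependsOnMethods = {"Test1"})
public void Test2() throws Exception {
    // ...
}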

mercredi 27 janvier 2016

How to UI test iOS local/remote notifications that show up on the lock screen using XCTest

I need to be able to UI test interacting with notifications that appear on the lock screen. After much researching I have not been able to find anything for XCTest supporting this. What should be used to test this?

More Test cases in Unit or SIT?

I had below question in one of the exam.

During software application testing, in which of these would one see more test scenarios written: unit testing or SIT?

MonkeyTalk + rake iOS testing

I need to write a rake script which runs several tests (.mt) on different iOS simulators (and iOS versions) and keeps all tests isolated from each other. I can't find any examples of using rake + MonkeyTalk. I would be grateful for any samples or at least advice.

How to make Travis CI to install Python dependencies declared in tests_require?

I have a Python package with a setup.py. It has regular dependencies declared in install_requires and development dependencies declared in tests_require, e.g. flake8.

I thought pip install -e . or running python setup.py test would also install my development dependencies so they'd be available. However, apparently they're not, and I'm struggling to set up my Travis CI build right.

install:
  - "pip install -e ."
script:
  - "python setup.py test"
  - "flake8"

A build configured as above will fail, because flake8 will not be found as a valid command. I also tried to invoke flake8 from inside the python setup.py test command (via subprocess), but also without success.

Also, I hate the fact that flake8 can't easily be made an integral part of the python setup.py test command, but that's another story.
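One hedged way to make the development tools importable on Travis is to declare them as a setuptools extra and install that extra explicitly (the extra name 'dev' and the package name below are placeholders):

# setup.py -- a sketch
from setuptools import setup

setup(
    name='mypackage',
    install_requires=[],
    extras_require={
        'dev': ['flake8'],  # tools needed on CI but not at runtime
    },
)

The Travis install step can then run pip install -e .[dev] so that flake8 ends up on the PATH.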

Play framework specs2 weird matcher syntax

In Play framework specs2 testing I have these lines...

    new WithApplication {
      val homeResponse = route(FakeRequest(GET, "/")).get
      val resultType = contentType(homeResponse)
      resultType.must( beSome.which( _ == "text/html") )
    }

^ This works, but when I pull " beSome.which( _ == "text/html") " into a separate variable...

    new WithApplication {
      val homeResponse = route(FakeRequest(GET, "/")).get
      val resultType = contentType(homeResponse)
      val textTypeMatcher = beSome.which( _ == "text/html")
      resultType.must( textTypeMatcher )
    }

^ Type Mismatch.

expected: Matcher[Option[String]], actual: OptionLikeCheckedMatcher[Option, Nothing, Nothing] ^

What is going on here?
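A hedged guess: inside must(...) the expected type Matcher[Option[String]] tells the compiler what the Option's element type is, but once the matcher is pulled out into its own val there is nothing to infer it from, so it collapses to Nothing. Annotating the types (a sketch) usually restores the original behaviour:

// may need: import org.specs2.matcher.Matcher
// give the compiler the element type explicitly
val textTypeMatcher: Matcher[Option[String]] = beSome.which((ct: String) => ct == "text/html")
resultType.must(textTypeMatcher)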

OCaml: How to test my own regular expression library

I made a simple regular expression engine, which supports concatenation, alternation, closure, and char a .. z. Source code is here

The way I represent nfa and dfa is to use record:

type state       = int with sexp, compare
type alphabet    = char with sexp, compare
type transaction = state * alphabet option * state with sexp, compare
type d_transaction = state * alphabet * state with sexp, compare

type state_set = State_set.t
type states_set = States_set.t

type nfa = {
  states       : State_set.t ;
  alphabets    : Alphabet_set.t ;
  transactions : Transaction_set.t; 
  start_state  : state;
  final_states : State_set.t;
}


type dfa = {
  d_states       : State_set.t ;
  d_alphabets    : Alphabet_set.t;
  d_transactions : D_Transaction_set.t ;
  d_start_state  : state ;
  d_final_states : State_set.t;
}

For example, the string "a*" will be parsed into Closure (Char 'a'), and then transformed to an NFA:

states: 0 1 2 3
alphabets: a
transactions: 0->e->1, 1->a->2, 2->a->3, 2->e->1, 0->a->3
start_state: 1
final_states: 4

then to a DFA:

states: 0 1
alphabets: a
transactions: 0->a->1, 1->a->1
start_state: 1
final_states: 0 1

However, I use a lot of recursion in my code, and the way my program generates state numbers for each node in the NFA and DFA is really unpredictable. I have no idea how to verify that the generated DFA is correct without using a pen and a piece of paper to check it myself.

I am trying to figure out a better way to test my code so I can add more features into my program in the future.

Can anyone please give me some suggestions?

Testing loops in rspec

I need to test a loop:

def some_method(param)
  loop do
    some_magic_code
  end
end

I need to know how to test some_magic_code, but I don't know how to stub loop. Right now my RSpec test goes into an infinite loop.
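One hedged approach, assuming some_method is defined on an object you can reference in the spec (subject below), is to stub Kernel#loop on that object so the block is yielded to once instead of looping forever:

# sketch: make `loop` yield a single time
allow(subject).to receive(:loop).and_yield

subject.some_method(:some_param)   # :some_param is just a placeholder argument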

How to stub a method on a model copy in RSpec?

Say I have the following code:

class Foo < ActiveRecord::Base

  def self.bar
    all.map(&:bar)
  end

  def bar
    # do something that I want stub in test
  end
end

Now I want to create test (Rspec):

foo = Foo.create
expect(foo).to receive(:bar)
Foo.bar

This test does not pass because Foo.bar calls a different instance of the same model than foo.

I wrote some complex code in such situations before, like:

expect_any_instance_of(Foo).to receive(:bar)

but this is not good, because there is no confidence that foo itself receives the message (there could be several instances). Also, expect_any_instance_of is not recommended by the RSpec maintainers.

How do you test such code? Is there any best practice?
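One hedged option is to pin down which instance the class method will operate on by stubbing the query itself, so the expectation can be set on a known object:

foo = Foo.create
# make the class method operate on exactly this instance
allow(Foo).to receive(:all).and_return([foo])

expect(foo).to receive(:bar)
Foo.bar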

Dynamically skip particular tests in testng

I'm trying to skip particular tests based on some condition using annotation transformers http://ift.tt/1PjlaUv

Here is my class

public class MyDisabledTest implements IAnnotationTransformer {


    @Test(enabled=false)
    public void testDisabled() {
        System.out.println("This is a disabled test");
    }

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
            Constructor testConstructor, Method testMethod) {
        // TODO Auto-generated method stub
        if ("testDisabled".equals(testMethod.getName())) {
            annotation.setEnabled(true);
        }
    }
}

Here's my xml

<suite name="MyTestNGSuite" parallel="methods"
    data-provider-thread-count="3" thread-count="3">
  <listeners>
    <listener class-name="com.walmart.gls.demo_package.MyDisabledTest"/>
  </listeners>
  <test name="MyTestNGTest">
    <classes>
      <class name="MyDisabledTest">
        <methods>
          <include name="testDisabled" />
        </methods>
      </class>
    </classes>
  </test>
</suite>

I'm trying to enable, at run time, a test which has been disabled at compile time. What am I getting wrong here? Nothing is printed to the console.

is numpy testing assert_array_less working correctly with np.inf?

I found this strange behaviour when testing arrays containing infinite values.

This works fine:

In [91]: np.testing.assert_array_less(5, 6)

In [92]: np.testing.assert_array_less(5, np.array([6]))

In [93]: 5 < np.inf
Out[93]: True

but when using the numpy testing module, it turns out that 5 is apparently not less than inf:

In [94]: np.testing.assert_array_less(5, np.array([np.inf]))
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-93-c43a15aa8a1a> in <module>()
----> 1 np.testing.assert_array_less(5, np.array([np.inf]))

c:\python27\lib\site-packages\numpy\testing\utils.pyc in assert_array_less(x, y, err_msg, verbose)
    911     assert_array_compare(operator.__lt__, x, y, err_msg=err_msg,
    912                          verbose=verbose,
--> 913                          header='Arrays are not less-ordered')
    914
    915 def runstring(astr, dict):

c:\python27\lib\site-packages\numpy\testing\utils.pyc in assert_array_compare(comparison, x, y, err_msg, verbose, header, precision)
    629             if any(x_isinf) or any(y_isinf):
    630                 # Check +inf and -inf separately, since they are different
--> 631                 chk_same_position(x == +inf, y == +inf, hasval='+inf')
    632                 chk_same_position(x == -inf, y == -inf, hasval='-inf')
    633

c:\python27\lib\site-packages\numpy\testing\utils.pyc in chk_same_position(x_id, y_id, hasval)
    606                                 % (hasval), verbose=verbose, header=header,
    607                                 names=('x', 'y'), precision=precision)
--> 608             raise AssertionError(msg)
    609
    610     try:

AssertionError:
Arrays are not less-ordered

x and y +inf location mismatch:
 x: array(5)
 y: array([ inf])

Why does numpy check whether infs are in the same positions? Is this the desired behaviour?

In [99]: np.__version__
Out[99]: '1.9.3'
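As a hedged workaround while that behaviour stands, a plain boolean comparison sidesteps the inf-position bookkeeping entirely:

import numpy as np

# element-wise comparison; broadcasting handles the scalar on the left
assert np.all(5 < np.array([np.inf]))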

how to test the micro-services rest api project

I am new to microservices projects. Could you please guide me on how to test microservices and how to run automation testing against a microservices project? What tools are available for testing? What areas do I have to focus on during testing?

Write test cases for a Scala route using spray-client and Mockito

My application consumes some web services using spray-client. In the application we are using a route, and the route uses spray-client for the web service calls. Now we are supposed to test our route and mock the spray-client. Any idea how I would mock the spray-client while still using the route for those calls?

How to create multiple instances of a browser in Protractor and still use Page Objects

I am trying to create multiple browser logins during my end-to-end tests with Protractor. Unfortunately, I can't seem to find out how to set up a second browser and still use page objects.

credentials.js:

var credentials = {
    user1: {
        username: 'my@myselfandi.org',
        password: 'boomchickiwawa',
        userId: 1
    },
    user2: {
        username: 'you@yourselfandyou.org',
        password: 'boomchickiwawa',
        userId: 2
    }
};

module.exports = credentials;

login.page.js:

var LoginPage = function () {
    this.url = 'https://my-site/#/login';

    this.el = {
        user: element(by.model('data.username')),
        pass: element(by.model('data.password')),
        submit: element(by.css('button[type="submit"]')),
        error: element(by.css('.error')),
        popup: element(by.css('.popup-container.popup-showing.active'))
    };

    this.open = function() {
        browser.get(this.url);
    };

    this.currentUrl = function() {
        return browser.getCurrentUrl();
    };

    this.doLogin = function(username, password) {
        this.el.user.clear();
        this.el.pass.clear();
        this.el.user.sendKeys(username);
        this.el.pass.sendKeys(password);
        this.el.submit.click();
    };
};

module.exports = LoginPage;

contact.page.js:

var ContactPage = function () {
    this.url = 'https://my-site/#/app/tab/contact';

    this.open = function() {
        browser.get(this.url);
    };

    this.currentUrl = function() {
        return browser.getCurrentUrl();
    };
};

module.exports = ContactPage;

login.spec.js:

var credentials = require('./config/credentials.js');
var LoginPage = require('./pages/login.page.js');
var ContactPage = require('./pages/contact.page.js');

describe('[E2E] Login', function() {
    var loginPage = new LoginPage();
    var contactPage = new ContactPage();

    it('should redirect to login page if trying to view a contact page as anonymous user', function() {
        contactPage.open();
        expect(loginPage.currentUrl()).toBe(loginPage.url);
    });

    it('should show an error when login is incorrect', function() {
        loginPage.doLogin('wronguser@wrongmail.nl', 'wrongpass');
        expect(loginPage.el.error.isPresent()).toBe(true);
        expect(loginPage.el.popup.isPresent()).toBe(true);
    });

    it('should close the NOT AUTHORIZED popup', function() {
        loginPage.el.popup.all(by.css('.button')).click();
        expect(loginPage.el.popup.isPresent()).toBe(false);
    });

    it('should be able to log in', function() {
        loginPage.doLogin(credentials.user1.username, credentials.user1.password);

        expect(loginPage.el.error.isPresent()).toBe(false);
        expect(loginPage.el.popup.isPresent()).toBe(false);
        expect(loginPage.currentUrl()).toBe(contactPage.url);
    });

    it('should be able to create a 2nd login', function() {
        /*
         Normally I would setup a second browser like so:
         var browser2 = browser.forkNewDriverInstance(true);
        */
    });

});

All tests now pass, but how do I create a second browser that still works with the page objects? I tried to pass the browser instance to the page objects, but that is failing.

Retry TeamCity build if tests failed within some threshold

Is there any option or plugin to rerun/trigger a TeamCity build if tests failed within some threshold? E.g.:

  • 10 tests from 100 failed - do not retry build
  • 11 tests from 100 failed - trigger build once again

Run a specific test in a single test class with Spock and Maven

I am using the Spock framework for testing (the 1.0-groovy-2.4 release). JUnit offers this option to run a specific test from the command line (with Maven):

mvn -Dtest=TestCircle#mytest test

How can I do this with Spock?

This version has a dependency on JUnit 4.12. The JUnit documentation states that this feature is supported only for JUnit 4.x, so Spock should basically offer something similar.
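For what it's worth, Spock specifications run on a JUnit runner, so Surefire's class-level selection generally works on the spec class; a hedged sketch (MyFirstSpec is a placeholder, and selecting individual feature methods by name depends on your Surefire version):

mvn test -Dtest=MyFirstSpec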

C# unit test lambda

namespace ConsoleApplication2
{   
    public class SomeClass
    {        
        delegate int Calculate(int a, int b);
        delegate void Dosomething();

        public void test()
        {
            Calculate cal = (a, b) => a + b;
            Console.WriteLine(cal(1, 3));
            Dosomething Doit = () =>
            {
                Object[] l_ICDInfo = new Object[3];
                Object[] l_ICDValue = new Object[3];

                l_ICDInfo[0] = 1;
                l_ICDInfo[1] = 2;
                l_ICDInfo[2] = 3;
                l_ICDValue[0] = 3;
                l_ICDValue[1] = 2;
                l_ICDValue[2] = 1;
                Console.WriteLine("do any things");
            };
               // Doit();
        }
    }
}
///////////////////////Unit test code///////////////////////
namespace UnitTestProject2
{
    [TestClass]
    public class UT_lmc_test
    {
        delegate int Calculate(int a, int b);
        delegate void Dosomething();
        [TestMethod]
        public void UT_lmc_test_001()
        {
             ConsoleApplication2.SomeClass test_var=new ConsoleApplication2.SomeClass();                
             test_var.test();         
        }
    }
}

If somebody knows how to test a lambda expression using unit test code, please show me how to do it. I've tried lots of things but I couldn't find a way.

mardi 26 janvier 2016

Protractor writing a cleaner test cases without using browser.sleep

I'm new to Protractor and Jasmine, and I use a lot of browser.sleep calls to make my test cases work:

it('Procedure tab-', function() {

        element(by.linkText('Medical History')).click();
        browser.sleep(500)
        element(by.linkText('Personal History')).click();
        browser.sleep(200)
        element(by.linkText('Procedure')).click();
        browser.sleep(500)
        element(by.css('[data-ng-show="ptab.index  === 1"] > [profile="profile"] > #medicalhistory > .card > [header="header"] > .card-header-bg > .title-header > .row > [ui-sref=".procedure.new"] > [data-ng-hide="important"]')).click();
        browser.sleep(500)
        $('label[for="dis4Appendicitis"]').click();
        browser.sleep(2000)
    })

What would be a more efficient way to write a test case without using browser.sleep? I have been using sleeps because of slower internet connectivity, etc.

Any help is appreciated
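A hedged sketch of replacing fixed sleeps with condition-based waits, assuming a Protractor version that provides protractor.ExpectedConditions (the selectors are copied from the question and the timeout values are arbitrary):

var EC = protractor.ExpectedConditions;

it('Procedure tab-', function() {
    var personalHistory = element(by.linkText('Personal History'));

    element(by.linkText('Medical History')).click();

    // wait until the next link is actually clickable instead of sleeping
    browser.wait(EC.elementToBeClickable(personalHistory), 5000);
    personalHistory.click();
});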

Soft keyboard not present, cannot hide keyboard - Appium android

I am getting the following exception:

 org.openqa.selenium.WebDriverException: An unknown server-side error occurred while processing the command. (Original error: Soft keyboard not present, cannot hide keyboard) (WARNING: The server did not provide any stacktrace information)
    Command duration or timeout: 368 milliseconds

I am using driver.hideKeyboard() to hide the soft input keyboard that is open on the screen.
How can I ensure that the keyboard is open before hiding it? Or is there any other workaround?

Getting Toolbar's title with Android's UI integration test framework

I have a fragment which sets the title to the entity the user is currently editing, as follows:

        Activity activity = this.getActivity();
        CollapsingToolbarLayout appBarLayout = (CollapsingToolbarLayout) activity.findViewById(R.id.toolbar_layout);
        if (appBarLayout != null) {
            appBarLayout.setTitle(mItem.name);
        }

In the testing code I successfully navigate to the fragment, however I'm unable to locate the element by the text as follows:

class DetailWrapper {

    protected UiObject2 find( BySelector selector ){
        device.wait(Until.findObject(selector), 2 * 1000);
        return device.findObject(selector);
    }

    public ItemView hasName(String itemName) throws Exception {
        UiObject2 title = find(By.text(itemName));
        assertEquals(title.getText(), itemName);
        return this;
    }
}

title in hasName(String) is always null in the assert despite being on the correct activity with the correct title. What is the best method to assert the AppBar's title?

Green test is failing when launched by Maven

I have a test which is green when it is launched directly through the IDE.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:dao/daoTest-context.xml"})
@ActiveProfiles(profiles = "test")
public class ClientDaoTest {

@Autowired
ClientDao clientDao;

@Test
public void shouldSaveClientInDBTest(){
    Client client = new Client();
    client.setName("ivan");

    client = clientDao.saveClient(client);

    assertNotNull(client.getId());
    assertEquals("ivan", client.getName());
}


}

daoTest-context.xml

<beans xmlns="http://ift.tt/GArMu6"
   xmlns:xsi="http://ift.tt/ra1lAU"
   xsi:schemaLocation="http://ift.tt/GArMu6 http://ift.tt/1jdM0fG">

<import resource="classpath:spring-context.xml" />

<beans profile="test">
    <bean id="ConnectionDB" name="ConnectionDB" class="org.mockito.Mockito" factory-method="mock">
        <constructor-arg value="biz.podoliako.carwash.services.impl.ConnectDB" />
    </bean>

    <bean id="myDataSourceTest" class="org.springframework.jdbc.datasource.DriverManagerDataSource" >
        <property name="driverClassName" value="org.h2.Driver"/>
        <property name="url" value="jdbc:h2:~/test"/>
        <property name="username" value=""/>
        <property name="password" value=""/>
    </bean>

    <bean id="entityManagerFactory"  class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="myDataSourceTest"/>
        <property name="packagesToScan" >
            <list>
                <value>biz.podoliako.carwash</value>
            </list>
        </property>

        <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                <property name="generateDdl" value="true" />
                <property name="showSql" value="true" />
            </bean>
        </property>

        <property name="jpaProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                <prop key="hibernate.show_sql">true</prop>
                <prop key="hibernate.hbm2ddl.auto">create</prop>
                <prop key="hibernate.connection.pool_size">1</prop>
            </props>
        </property>
    </bean>
</beans>

However, when the Maven command "mvn test" runs this test, it throws "Tests in error: shouldSaveClientInDBTest(dao.ClientDaoTest): Failed to load ApplicationContext", and the main explanation is:

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ConnectionDB' defined in class path resource [dao/daoTest-context.xml]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [java.lang.Object]: Factory method 'mock' threw exception; nested exception is org.mockito.exceptions.misusing.UnfinishedVerificationException: Missing method call for verify(mock) here: -> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Example of correct verification: verify(mock).doSomething()

Also, this error might show up because you verify either of: final/private/equals()/hashCode() methods. Those methods cannot be stubbed/verified.

Please help me to find out what I did wrong.

Angular 2 AND Karma Test AND Async HTTP

So, my problem is very easy to explain.

This is my test spec:

import {
  describe,
  expect,
  it,
  inject,
  beforeEachProviders
} from 'angular2/testing_internal';
import {RestClient} from './rest.service';
import 'rxjs/add/operator/toPromise';
import 'rxjs/add/operator/delay';
import {
  HTTP_PROVIDERS
} from 'angular2/http';
export function main() {
   describe('RestClient Service', () => {
      beforeEachProviders( () => [HTTP_PROVIDERS, RestClient] );
      it('is defined', inject( [RestClient], (client) =>{
         client.get('http://ift.tt/17vsUzp')
         .delay(2000)
         .toPromise()
         .then((res) => {
            console.log('test');
            expect(res.length).toBeGreaterThan(1000);
         });
       }));
    });
  }

and this is the method in the "RestClient" class that returns an Observable:

public get(url:string): Observable<any> {
    return this.http.get(url).map(res => res.json());
}

So, I start the test and the test returns:

START:
LOG: 'ciao'
RestClient Service
    ✔ is defined
PhantomJS 2.0.0 (Mac OS X 0.0.0) LOG: 'ciao'

Finished in 0.026 secs / 0.038 secs

SUMMARY:
   ✔ 2 tests completed

For Karma all works well and the test passes correctly, but this is not true: if I put a console.log into the "then", it is never called. So I suppose there is a problem with async calls. Do you have any idea how to test async calls in Angular 2? I have used inject and async inject too. I know that I can use a MockBackend, but I need to test with external URLs.

Thanks in advance for your help.

What is the actual flow for developing a feature through testing with BDD?

-Scroll further down for the actual question-

Hello, this question is intended to help understand best practices for developing your projects. I've looked through plenty of projects on GitHub and they all seem to create a test suite per module, e.g.:

Project tree:

./lib/module_a.js
./lib/module_b.js
./tests/module_a.js
./tests/module_b.js

I have yet to come across someone performing unit tests by utilizing, for instance, mocha.js. Most if not all tests are of the integration or functional type. To me it just seems like everyone is doing stuff on a whim without following any actual model, or following one but doing a pretty poor job of it.

I initially had the misconception that you make test suites for features only, and that you divide up your types of testing such as unit, integration and functional into separate test suites - but this does not seem to be the case at all.

Note: I do comprehend the concept of all the four tests as described in the following post: What's the difference between unit, functional, acceptance, and integration tests?

The Actual Question:

I would really like to know how one would go about developing a feature through the following flow: working your way through unit testing, followed by integration testing, functional testing and optionally acceptance testing; and how that would actually look using something like mocha.js as a testing tool while following the behaviour-driven development (BDD) model.

  • Don't forget to include the directory structure for deeper understanding of what's going on.

Bonus: if possible, it would be great if you could also use the git flow model as described on the following page: http://ift.tt/mixx0f, e.g. 1) creating your new project, 2) adding the initial feature to that project through testing, 3) releasing 0.1.0, 4) adding the second feature through testing, 5) releasing 0.2.0.

Ultra Bonus: Upload and share a super small and simple example project on GitHub that answers the aforementioned questions (preferably with node.js and mocha.js).

P.S. This is my first post on Stack Overflow, and probably a question many people besides myself need answered. Thanks in advance.

testing application and software using metrices

I just came across this paper (http://ift.tt/1nNrIjU) suggesting that the best way to evaluate an application framework is by quality tests consisting of:

  • Size metrics: lines of code (LOC), number of statements, comment lines and the ratio between comment lines and lines of code.
  • Complexity metrics: McCabe's cyclomatic complexity (CC), branches and depth.
  • Maintainability metrics: Halstead metrics (program volume and program level) and the Maintainability Index (MI).

These are followed by validation tests using the Yasca software utility (sourceforge.net/projects/yasca) in combination with JavaScript Lint (javascriptlint.com), where overall errors include critical, high and low severity errors as well as informational errors. There is no proper explanation of why these tests are the best. I wanted to take this as my research topic for my master's, but I wanted a second opinion.

Where to find sample C code to use for testing

I would like to find varied samples of valid C code to test some simple parsing and compiling programs for C. Is there anywhere online where I can find this?

Why do we need a Selenium Grid plugin?

There is this plugin for Jenkins.

However, I don't really see the benefit of this. Why can't I have my own Selenium Grid and run my tests directly instead of a plugin? Is this not supported somehow?

Let's say that my project invokes a hub with 2 nodes (outside of Jenkins), and I build this project with Jenkins. Will it not work?

Testing a source file

How can I run test cases for a void function in C++? For example, if I have 100 test cases and write a for(i=1;i<=100;i++) loop to run the function 100 times, the program enters an infinite loop.

10000 users jmeter Distributed Testing

I am running a test with 10,000 users using JMeter. The client requirement is "10,000 logins in around 30 minutes (1800 seconds), which means about 5 per second". I set up distributed testing with 10 slave systems, excluding the master system. I have 5 HTTPS requests per user. My test plan is as follows:

Test Plan
  Thread group
    Simple controller
      - Hello
      - authenticate
      - get resource list
      - allocate RESOURCE
      - Bye
    CSV Data Config
    HTTP Cookie manager
    HTTP Request Defaults
    Http Header manager
    User defined variables
    View results tree
    View results in table
    Summary Report

In my master system the following are the inputs:

Number of Users: 1000 (1000 * 10 slave systems = 10000)
Ramp up period (in seconds): 1800 seconds

Loop count: 1

Issues:

  • The response times are way higher even for the first Hello response.
  • Only a handful of "Authenticate" requests get executed until all the "Hello" requests are executed. (The same behaviour is observed for the following HTTPS requests.)
  • Meanwhile, when the turn of the "Authenticate" request comes, the error is ERROR_MISSING_SESSION (for the obvious reason: timeout = 15 mins).

Please let me know where I am going wrong.

I am in need of quick help. Thank you for your time.

Debugging website on iOS chrome

I've got a bug that only shows up in the mobile Chrome browser on my iPhone (iOS 8). What's odd is that the same bug doesn't show up in iOS 8 Safari on the same phone.

I want to debug the issue in the Chrome browser, but I can only find info on debugging Safari. Is there a way to debug iOS Chrome?

Akka Scala TestKit test PoisonPill message

Given that I have a Supervisor actor which is injected with a child actor how do I send the child a PoisonPill message and test this using TestKit?

Here is my Supervisor:

class Supervisor(child: ActorRef) extends Actor {

  ...
  child ! "hello"
  child ! PoisonPill
}

Here is my test code:

val probe = TestProbe()
val supervisor = system.actorOf(Props(classOf[Supervisor], probe.ref))
probe.expectMsg("hello")
probe.expectMsg(PoisonPill)

The problem is that the PoisonPill message is not received. Possibly because the probe is terminated by the PoisonPill message?

The assertion fails with

java.lang.AssertionError: assertion failed: timeout (3 seconds) 
during expectMsg while waiting for PoisonPill
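One hedged way to assert this: PoisonPill is handled by the actor system itself and never reaches the probe's user-level mailbox, so instead of expectMsg(PoisonPill) you can watch the probe and expect its termination (watch and expectTerminated are available inside a TestKit-based spec):

val probe = TestProbe()
val supervisor = system.actorOf(Props(classOf[Supervisor], probe.ref))

probe.expectMsg("hello")

// assert that the child is stopped by the PoisonPill
watch(probe.ref)
expectTerminated(probe.ref)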

How to use browser.get() in Protractor

I was using Protractor to test a dropdown selection. The code is the following:

var productCountry = element(by.model('main.productCountry1'));
productCountry.click();
var country = element.all(by.repeater('country in countries'));
country.get(5).click();
browser.sleep(500);

The error is:

Stack:
ElementNotVisibleError: element not visible
  (Session info: chrome=44.0.2403.81)`

I used this code to see the data inside the names array:

var names =element.all(by.repeater('country in countries').column('country.name'));
names.getText().then(function(txt){
    console.log(txt);
});

And the result shows the array looks like this:

['','', '', '', '', .....,'','Kenya','Kiribati',....,'','','','']

I can only get part of the elements; the other elements are all empty. I ran it several times, and each time the empty parts are different. I used the first line of code

browser.get('dist/#');
browser.get('app/index.html');

to get to the web page. If I run the second line of code above, 'Not Found' is shown on the website, and the error is:

Stack:
Error: Failed: Error while running testForAngular: asynchronous script timeout: result was not received in 11 seconds
  (Session info: chrome=44.0.2403.81)

Did this cause the empty elements? The examples I saw online always used the second form of browser.get(). Has anyone used the first method?

I found that if I change the data in $scope.country in scr/app/tab/main.controller, the options in the selection change when I run gulp serve, but when I run Protractor and load from dist/#, the options are not changed. I guess this may have caused the empty elements. I used Yeoman to generate the files.