Tuesday, January 31, 2017

Setting up Django development, test, and production environments

I am setting up a CentOS server with Apache, and it got me wondering how to set up development, testing, and production environments and how to set up the settings.py files.

For the development environment, I was going to develop on my local computer, use the Postgres test database on the CentOS machine, and use runserver as the web server. When done coding, I was going to push to Git.

For the testing environment, I was going to pull, or clone, from Git to the CentOS server. I was going to have the test app running on port 8080 and using the Postgres test database.

For the production environment, the app would run on port 80 and use the production Postgres database.

Now these questions might seem to have pretty obvious answers, but I feel the need to ask for clarity:

1) The production and testing directories will need to be completely separated from one another, correct? Obviously, if they share the same directory, any issues that come up during testing will be present in the production environment, which would be bad.

2) Apache would need to be set up to serve the test environment on port 8080, pointing at the test directory for the test app. Then there would need to be one for production on port 80, pointing at the production directory for the production app, correct?
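For reference, a minimal sketch of what such a split Apache setup might look like with mod_wsgi (all paths, names, and WSGI details are assumptions, not taken from the post):

Listen 8080

<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess prod python-path=/srv/myapp_prod
    WSGIProcessGroup prod
    WSGIScriptAlias / /srv/myapp_prod/myapp/wsgi.py
</VirtualHost>

<VirtualHost *:8080>
    ServerName example.com
    WSGIDaemonProcess test python-path=/srv/myapp_test
    WSGIProcessGroup test
    WSGIScriptAlias / /srv/myapp_test/myapp/wsgi.py
</VirtualHost>

Each vhost points at its own completely separate checkout, which is exactly the separation asked about in question 1.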

Just looking to confirm my thinking on the subject.

Main Method to be used by Test Class

Hi, so I have my main class, but I'm having trouble figuring out how to output my code from my test class. Even though I've tried many ways, I don't understand how to output the simple addition and subtraction of two fractions, as my main method should do; I can't seem to get it into my test class.

Here is my code for the class with all the functions:

package rational;

 public class Rational {

private int numer, denom;

 //constructors
    public Rational(){
        // assign the fields (not local variables) so the default really is 1/2
        numer = 1;
        denom = 2;
        reduce();
    }
    public Rational(int num, int den){
    numer = num;
    denom = den;
    reduce();
    }
    public Rational(Rational x){
    numer = x.numer;
    denom = x.denom;
    reduce();
    }

   //setters
    public void setNumer(int num){
    numer = num;
    reduce();
    }
    public void setDenom(int den){
    denom = den;
    reduce();
    }
    public void setRational(int num, int den){
    numer = num;
    denom = den;
    reduce();
    }

     //getters
    public int getNumer(){
    return numer;
    }
    public int getDenom(){
    return denom;
    }

    //Copy method
    public void copyFrom(Rational x){
    numer = x.numer;
    denom = x.denom;
    reduce();
    }

    //Equals method        
    public boolean equals(Rational x){
    // cross-multiply: integer division would truncate (1/2 and 1/3 both give 0)
    return numer * x.denom == x.numer * denom;
    }

    //Compare to method
    public int compareTo(Rational x){
    // cross-multiplied comparison avoids integer-division truncation
    int lhs = numer * x.denom;
    int rhs = x.numer * denom;
    if (lhs == rhs){
    return (0);
    }
    else if (lhs < rhs){
    return (-1);
    }
    else{
    return (1);
        }
    }

    //Find greatest common divisor
    static int gcd(int x, int y){
    int r;
    while (y != 0) {
    r = x % y;
    x = y;
    y = r;
        }
    return x;
    }

    //Rational Addition            
    public void plus(Rational x){
    int greatdenom = x.denom * denom;       
    int multx = greatdenom / x.denom;
    int mult = greatdenom / denom;
    denom = x.denom * denom;
    numer = (x.numer * multx) + (numer * mult);
    reduce();
    }

    //Rational Subtraction
    public void minus(Rational x){
    int greatdenom = x.denom * denom;
    int multx = greatdenom / x.denom;
    int mult = greatdenom / denom;
    denom = greatdenom;
    // subtract directly; the old if/else swapped the operands,
    // which returned the absolute difference instead of this - x
    numer = (numer * mult) - (x.numer * multx);
    reduce();
    }

     //Multiplication       
    public void times(Rational x){
    numer = numer * x.numer;
    denom = denom * x.denom;
    reduce();
    }

    //Division
    public void divBy(Rational x){
    // multiply by the reciprocal; dividing the parts separately
    // would truncate under integer division
    numer = numer * x.denom;
    denom = denom * x.numer;
    reduce();
    }

     //Fraction simplifier        
    private void reduce(){
    int divisor;
    divisor = Rational.gcd(numer, denom);
    numer = numer / divisor;
    denom = denom / divisor;
    }

@Override
    public String toString(){
    if (denom == 1){
    return numer + "";
    }
    else{
    return numer + " / " + denom;
    }       
}
   }
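For reference, a minimal sketch of a separate driver/test class that exercises plus and minus and prints the results (the package name comes from the code above; everything else is illustrative):

package rational;

public class RationalTest {
    public static void main(String[] args) {
        Rational a = new Rational(1, 2);
        Rational b = new Rational(1, 3);

        // copy before mutating, since plus()/minus() modify the receiver
        Rational sum = new Rational(a);
        sum.plus(b);
        System.out.println(a + " + " + b + " = " + sum);   // 1 / 2 + 1 / 3 = 5 / 6

        Rational diff = new Rational(a);
        diff.minus(b);
        System.out.println(a + " - " + b + " = " + diff);  // 1 / 2 - 1 / 3 = 1 / 6
    }
}

The same calls translate directly into JUnit assertions, e.g. assertTrue(sum.equals(new Rational(5, 6))).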

Angular 2 mocking an async service that calls another service

I'm learning Angular 2 testing (Karma, Jasmine). I already have a working test for an HTTP service, largely pulled from this Semaphore tutorial on services and testing. I have the test working right through the usual async(inject([MyService], ...

My actual program has a service wrapped in a service, as below.

@Injectable()
export class GlobalsService {
  private options: Option[] = [];
  error: any;

  constructor(private optionService: OptionService) { }

  public getGlobals(): void {
    let that = this;
    this.optionService
      .getOptions()
      .then(options => that.fillOptions(options))
      .catch(error => that.error = error);
  }
  [SNIP]

The optionService.getOptions() returns a Promise which is waited for and then fills the globalsService.options list. The globalsService.getGlobals() is called either synchronously or in a place where the asynchronous (delayed) fill of its contents is hidden.

export class AppComponent implements OnInit {
  constructor(private globalsService: GlobalsService) { }

  ngOnInit() {
    this.globalsService.getGlobals();
  }
  [SNIP]

What I'm stuck at is how to call globalsService.getGlobals() in a testing context. I think I'm supposed to call it through async().

So far my mock OptionService is:

@Injectable()
export class MockOptionService {
  constructor() { }

  getOptions(): Promise<Option[]> {
    let options: Option[] = [
      { id: 'NY' } // truncated property list
    ];
    return Promise.resolve(options);
  }

}

I am then planning to call it through:

  it('should get Option objects async',
    async(inject([GlobalsService, MockOptionService], (globalsService: GlobalsService, optionService: OptionService) => {

      globalsService.getGlobals()
        .then(() => {
          expect(globalsService.getOptions().length).toBe(1);
        });
    })));

However, my "smart" programmers editor (SublimeText) says that "Property 'then' does not exist on type 'void'.", leaving me unsure if I should have async(inject or just use a tick().

Comments, anyone?

Thanks, Jerome.

How to recursively test for all crates in under a directory?

Some projects include multiple crates, which makes it a hassle to run all tests manually in each.

Is there a convenient way to recursively run cargo test?
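One approach that may help, sketched under the assumption that the crates live in one repository: declare a workspace in a root Cargo.toml, so a single cargo test (or cargo test --all on recent toolchains) covers every member:

# Cargo.toml at the repository root (member paths are assumptions)
[workspace]
members = ["crate-a", "crate-b", "tools/subcrate"]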

How to access the `detailTextLabel` in a `tableViewCell` in UI tests?

I want to check whether there is a tableViewCell.detailTextLabel with a given string in my UI test. The problem is that when I search with app.tables.cells.children(matching: .staticText), it only finds labels that are tableViewCell.textLabel. Any ideas on how to query for the detailTextLabel?
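For illustration, one query worth trying (a sketch; the label text is an assumption): staticTexts matches descendant elements rather than direct children, so it also sees the detail label that children(matching: .staticText) skips.

let cell = app.tables.cells.element(boundBy: 0)
// .staticTexts searches all descendants, so it finds the detailTextLabel too
XCTAssertTrue(cell.staticTexts["expected detail text"].exists)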

Extract all test classes and test cases for each file from a script

Has anyone heard of a way to get the TestCases and tests from a UI / unit test target?

Building a scraper is rather easy but tricky; I'm wondering if there is a better method to do it.

It'd be awesome to get output like:

  UITests:
    TestCase1
      - test_test1
      - test_test2
      - test_test3
  UnitTests:
    TestCase1
    TestCase1
      - test_test1
      - test_test2
      - test_test3
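In the absence of a built-in method, a minimal regex-based sketch of such a scraper (assuming Swift sources under UITests/ and UnitTests/ directories, which is an assumption):

import os
import re

TEST_CLASS = re.compile(r'class\s+(\w+)\s*:\s*XCTestCase')
TEST_FUNC = re.compile(r'func\s+(test\w+)\s*\(')

for target in ('UITests', 'UnitTests'):
    print(target + ':')
    for root, _, files in os.walk(target):
        for name in files:
            if not name.endswith('.swift'):
                continue
            source = open(os.path.join(root, name)).read()
            for cls in TEST_CLASS.findall(source):
                print(' ', cls)
            for fn in TEST_FUNC.findall(source):
                print('   -', fn)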

Unit tests: mocking a tested function

I know the basics about test doubles, mocking, etc., but I'm having problems testing the following:

void funA(args...) {
    /* do some complicated stuff, using mocked functions */
}

I've written the unit tests for funA, checking that the right functions were called (using their mocked implementations).

Now, I want to test this function

void funB(args...) {
    /* do some complicated stuff, and call `funA()` on some situations */
}

How can I be sure my funA function was called from funB? I can't add a fake implementation to funA; I need its production code so it can be tested.

What I am doing now is making sure the mocks that funA calls are called as I expect them to be. But it's not a good method, because it's like I'm testing funA all over again, when I just want to make sure funB does its job.
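One technique that fits this situation is a link seam: the test binary links a fake funA instead of the production one, while funB's production source is untouched. A minimal sketch (file names and the call-counting scheme are illustrative):

/* fun_b.c -- production code under test */
void funA(int x);                       /* defined elsewhere */
void funB(int x) { if (x > 0) funA(x); }

/* test_fun_b.c -- compiled WITHOUT fun_a.c, so this fake is linked in */
#include <assert.h>

static int funA_calls = 0;
void funA(int x) { funA_calls++; }      /* fake replaces the real funA */
void funB(int x);

int main(void) {
    funB(1);
    assert(funA_calls == 1);            /* proves funB called funA */
    return 0;
}

funA's own unit tests keep linking the real fun_a.c, so both functions stay covered without re-testing funA through funB.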

Change/Reduce LAN speed for test

I made a simple program that implements TCP and other network functionality, and I want to test its efficiency. To do that I need to simulate a normal network (by normal I mean a not-very-fast LAN), so for my first test I need to change the local network speed. I think that I need to change the router's configuration (my router is a TP-Link TD-W8960N), but I haven't found any LAN speed config entry. I know that there are many Windows programs to control speed, but I don't want to install any additional software (I don't have much hard drive space); this option should be in the system settings or the router.

So do you know how to do this? Thanks for any help and info.

Access to config variables when testing a package in Laravel

I'm writing a test in Laravel and I want to unit test this piece of code:

if (file_exists(\Config::get('maintenance.dir.api'))) {
        throw new ServiceUnavailableException('We are down for maintenance');
    }

I'm using Illuminate\Foundation\Testing\TestCase and I can't access the config variables from the main app in the test. In my package, I don't have any config. Can I mock the config folder or something like that?
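For illustration, Laravel's config repository can usually be written to directly in a test rather than mocked; a hedged sketch (the fake path is an assumption):

public function testThrowsWhenMaintenanceFileExists()
{
    // point the config key at a file that certainly exists
    \Config::set('maintenance.dir.api', __FILE__);

    $this->expectException(ServiceUnavailableException::class);
    // ...invoke the code under test here...
}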

Thank you!

Need a complete example of API testing Selenium to fill a form

The requirement is to fill a lengthy form (with radio buttons, drop-downs, text boxes, three-dotted buttons). Since UI testing is pretty time-consuming, we have decided to move to API testing. Currently we use FitNesse, Selenium, Java, Maven and JUnit for the UI testing, but the idea is to remove FitNesse from the scene and move to API testing. I need a complete example of something like a new-user sign-up for Gmail, where a form needs to be filled.

How to obtain some fake telegram accounts to test my project

I need three fake Telegram accounts to test my project. Is it possible to create or obtain them somehow? Or do I have to buy three SIM cards and register three accounts?

Is JavaScript compatible with strict Page Object Pattern?

I have built various Test Automation frameworks using the Page Object Pattern with Java (http://ift.tt/19A2hZK).

Two of the big benefits I have found are:

1) You can see what methods are available when you have an instance of a page (e.g. typing homepage. will show me all the actions/methods you can call from the homepage)

2) Because navigation methods (e.g. goToHomepage()) return an instance of the subsequent page (e.g. homepage), you can navigate through your tests simply by writing the code and seeing where it takes you.

e.g.

WelcomePage welcomePage = loginPage.loginWithValidUser(validUser);
PaymentsPage paymentsPage = welcomePage.goToPaymentsPage();

These benefits work perfectly with Java since the type of object (or page in this case) is known by the IDE.

However, with JavaScript (a dynamically typed language), the object type is not fixed at any point and is often ambiguous to the IDE. Therefore, I cannot see how you can realise these benefits in an automation suite built using JavaScript (e.g. with Cucumber).

Can anyone show me how you would use JavaScript with the Page Object Pattern to gain these benefits?
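One hedged answer: if the suite is written in TypeScript (or JavaScript with type annotations the IDE understands), the type information comes back. A minimal sketch of typed page objects (class and method names are illustrative):

interface User { name: string; password: string; }

class PaymentsPage {
  // payments actions would live here
}

class WelcomePage {
  goToPaymentsPage(): PaymentsPage {
    // ...click through the UI here...
    return new PaymentsPage();
  }
}

class LoginPage {
  loginWithValidUser(user: User): WelcomePage {
    // ...fill the form and submit here...
    return new WelcomePage();
  }
}

const validUser: User = { name: 'alice', password: 'secret' };
const welcomePage: WelcomePage = new LoginPage().loginWithValidUser(validUser);
const paymentsPage: PaymentsPage = welcomePage.goToPaymentsPage();

Autocompletion on welcomePage. then behaves much like the Java version.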

How to do unit testing using JUnit

I'm having a hard time creating a cascading drop-down list backed by a relationship; when I remove the relationship, it works fine. I have 3 links: the Vendor Master, Expense Master and Credit Card pages. The Vendor Master and Expense Master are related: when I create a vendor transaction it has an expense category field, which in the database is the ExpenseID linked to the expense table.

Creating the vendor transaction works fine, but my problem is on the Credit Card page: when I select the vendor it should populate the expense field, which, as in my statement above, depends on that relationship. Thank you.

Fixtures deleted after being generated by exec() function

I am working on a big Symfony2 project in which there are two kinds of test classes: those where we expect the data to be manipulated, and those where the data should not be manipulated (due to insufficient credentials).

In this second kind of test class I am trying to implement a public static function setUpBeforeClass() method in which the fixtures are loaded once for the whole test class.

However, in order to do so, the only way I found is to:

  1. Create a "test class" that only generates fixtures, like so:

    class ResetFixturesTest extends WebTestCase{
          public function testResetFixtures(){   
               $this->loadFixtures(array(
                    "Path\\to\\my\\fixtures\\classes"
               ));
          }
    }
    
    

    Note that I did not implement the $this->loadFixtures() function; it was created by a former employee, so I kind of "must" use this way of generating fixtures.

  2. Create my setUpBeforeClass() function in DummyClassTest like this:

    public static function setUpBeforeClass() {
        parent::setUpBeforeClass();
        $process = new Process('phpunit -c app/ --filter testResetFixedFixtures');
        $process->run();
    }
    
    

Then I can launch my test class.

The problem is: when I launch my test class, let's say phpunit -c app/ --filter DummyClassTest, my fixtures are first properly generated from the setUpBeforeClass() method, but then, when it actually starts my tests from DummyClass, my fixtures disappear from my database. And so I get an error, because I need users from my database.

How can I solve that?

Thank you.

Should I keep testing code or implement more functionality?

I have a side project that I love to code; I spend time on it when I can, since I'm still finishing my university studies. When I started it, I barely knew good programming practices, TDD, or other such things; I only coded it for fun.

Several iterations, refactors, improvements, and some accumulated knowledge later, I began writing what unit tests and integration tests I could before implementing new functionality. However, I still don't have enough time to really write all the tests needed for acceptable code coverage... although the software works well.

So when I have time to spend on this project, I want to implement new functionality (this time, yes, writing the unit tests in parallel), not write a pile of tests that, I have to say, are very boring, many of them hard to write because of mocking and the like...

Should I keep adding functionality or should I finish all the tests before?

Because of this, I decided the software should stay in beta until a reasonable code coverage is reached. At this time it's on version 0.9-beta.

If I add new functionality, should I follow semantic versioning and keep the beta? For example, the next iterations would be 0.10-beta, 0.11-beta and so on until the tests are done, when it would finally become a non-beta version.

If you want to check my project, here is the link: http://ift.tt/2jyqIO8

VSTS test agent is not running the test build as an x64 process; a "no matching test is present" error is displayed

We are running these tests as part of a VSTS pipeline on an Azure VM. The test agent gets deployed successfully, but the Run Functional Tests task fails. We are building the whole solution, including the test solution, as x64.

Jenkins returns job status FAILURE, however it succeeds on the command line

I am automating my Protractor integration tests using Jenkins. When I test locally on the machine running Jenkins using npm run e2e-jenkins, it runs normally. But when I integrate it into a Jenkins pipeline, it gives me the error below:

Starting up http-server, serving dist
Available on:
  http://192.168.5.45:3000
  http://127.0.0.1:3000
Hit CTRL-C to stop the server
[10:47:12] I/local - Starting selenium standalone server...
[10:47:12] I/launcher - Running 1 instances of WebDriver
[10:47:12] E/launcher - Server terminated early with status 2
[10:47:12] E/launcher - Error: Server terminated early with status 2
    at Error (native)
    at C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\remote\index.js:242:20
    at ManagedPromise.invokeCallback_ (C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\lib\promise.js:1379:14)
    at TaskQueue.execute_ (C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\lib\promise.js:2913:14)
    at TaskQueue.executeNext_ (C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\lib\promise.js:2896:21)
    at asyncRun (C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\lib\promise.js:2775:27)
    at C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\node_modules\selenium-webdriver\lib\promise.js:639:7
    at process._tickCallback (internal/process/next_tick.js:103:7)
[10:47:12] E/launcher - Process exited with error code 199

npm ERR! Windows_NT 6.1.7601
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "protractor" "config/protractor.jenkins.conf.js"
npm ERR! node v6.9.4
npm ERR! npm  v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! vitic-frontend@0.0.1 protractor: `protractor "config/protractor.jenkins.conf.js"`
npm ERR! Exit status 199
npm ERR! 
npm ERR! Failed at the vitic-frontend@0.0.1 protractor script 'protractor "config/protractor.jenkins.conf.js"'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the vitic-frontend package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     protractor "config/protractor.jenkins.conf.js"
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs vitic-frontend
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls vitic-frontend
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\npm-debug.log
ERROR: "protractor config/protractor.jenkins.conf.js" exited with 1.

npm ERR! Windows_NT 6.1.7601
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "e2e-jenkins"
npm ERR! node v6.9.4
npm ERR! npm  v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! vitic-frontend@0.0.1 e2e-jenkins: `npm-run-all -p -r server:prod:ci "protractor config/protractor.jenkins.conf.js"`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the vitic-frontend@0.0.1 e2e-jenkins script 'npm-run-all -p -r server:prod:ci "protractor config/protractor.jenkins.conf.js"'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the vitic-frontend package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     npm-run-all -p -r server:prod:ci "protractor config/protractor.jenkins.conf.js"
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs vitic-frontend
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls vitic-frontend
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\Program Files (x86)\Jenkins\workspace\Pipeline frontend e2e\org.lhasalimited.vitic.frontend.web\npm-debug.log
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] mail
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

You will find all the needed files here.

Selenium WebDriver using Java: queries

  1. How do I check that the values of a drop-down in Selenium match values in a database table? For example, the Country drop-down has a country id and country name and gets its values from the country table; similarly, the State drop-down contains state names and state ids from the state table. The state values that get populated depend on the country we specify. How do I write code to check that the drop-down values and the database values match? (A sketch follows after this list.)

  2. In TestNG, how do I find the passed and failed test cases while the test cases are still executing?
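For the first question, a minimal sketch (element ids, the JDBC URL, and table/column names are all assumptions): read the rendered options with Selenium's Select helper, read the expected values over JDBC, and compare. For the second, TestNG's ITestListener interface (onTestSuccess / onTestFailure) reports results while the run is still in progress.

import java.sql.*;
import java.util.*;
import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.Select;
import org.testng.Assert;

public class CountryDropdownCheck {
    WebDriver driver; // initialised elsewhere

    public void verifyCountryDropdown(String jdbcUrl, String user, String pass) throws SQLException {
        // values as rendered in the drop-down
        Select country = new Select(driver.findElement(By.id("country")));
        List<String> uiValues = new ArrayList<>();
        for (WebElement option : country.getOptions()) {
            uiValues.add(option.getText());
        }

        // values as stored in the database
        List<String> dbValues = new ArrayList<>();
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, pass);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT country_name FROM country ORDER BY country_name")) {
            while (rs.next()) {
                dbValues.add(rs.getString(1));
            }
        }

        Collections.sort(uiValues);
        Assert.assertEquals(uiValues, dbValues);
    }
}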

Is sequential testing possible in Golang? [duplicate]

This question already has an answer here:

I wrote five test files, for example:

1_test.go, 2_test.go, 3_test.go, 4_test.go, 5_test.go

Each test file has its own setup and teardown logic for environment variables, so I want to run the tests sequentially.

But when I type go test, I think they may be executed in parallel.

I tried "go test -cpu 1" and "go test -cpu 1 -parallel 1", but it is not working.

Can anybody help me?
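For reference: within a single package, test functions already run sequentially unless they call t.Parallel(); it is separate packages that go test runs concurrently. If the five files belong to different packages, limiting package-level parallelism may be what is wanted (a hedged suggestion; -p controls how many packages are built and tested at once):

go test -p 1 ./...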

Android Game Application Testing Using Tools

I am planning to automate an Android game application created using the Cocos2d framework; the source code is written in JavaScript and C++ and is about 1.2 GB. Please suggest a tool.

Which kind of tool is better: object-based or image-capture-based? At first I planned on Appium, but how will it recognize objects? If we use object-based tools, we have n number of objects here. Please help me out.

Testing Jenkins configuration

I've been looking on Google and Stack Overflow but have yet to find a solution to my question. If I want to test the integrity of the Jenkins plugins and their configuration in a big automated CI, is there a test suite or plugin available for that? If not, what could be a possible approach to start building it myself?

Thanks!

Monday, January 30, 2017

What should we test in a unit test of an Angular 2 component?

I am a newbie to Angular 2. I wrote a small application in Angular 2 which has a few components. I want to write unit tests for my client application.

Someone suggested to me that when we write a unit test for a component (as in AngularJS), it loads the template/HTML for that component as well, so we have to take care of the UI elements inside the unit test.

My point is that we should only be writing unit tests for: 1. the (TypeScript) methods defined inside the component which contain logic or processing; 2. any service code written in the application.

We should not be bothered with the actual UI elements (HTML/CSS). The testing framework should not load the UI template (HTML/CSS) at all.

Is my understanding of unit testing correct? Please give your input.

Atul Sureka

Capture stdout and stderr to test Node.js CLI

Using a technique outlined in another answer, I was able to write a test for the --help switch:

const expect = require('chai').expect
const exec = require('child_process').exec
const cli = './cli.js'

describe('help', function () {
  var capturedStdout
  var capturedStderr
  // http://ift.tt/1sR6i2m
  // var cmd = cli + " --help 1>&2"
  var cmd = cli + ' --help'

  before(function (done) {
    exec(cmd, function (error, stdout, stderr) {
      if (error) done(error)
      capturedStdout = stdout
      capturedStderr = stderr
      done()
    })
  })

  it('should succeed', () => {
    expect(capturedStderr).be.empty
  })

  it('should have some usage instructions', () => {
    expect(capturedStdout).to.match(/Usage: words \[options] \[pattern]/)
  })

  it('should show a sematic version number', () => {
    // http://ift.tt/2klxmvH
    expect(capturedStdout).to.match(/v\d+\.\d+\.\d+/)
  })

  it('should have some examples', () => {
    expect(capturedStdout).to.match(/Examples:/)
  })
})

There are two problems I'm having:

  1. It's 45 lines long for one switch.
  2. If I add another describe block for a different switch, for example --version, then I get the following error: Error: done() called multiple times

The solution is to move the test into another file.

Is there a better way to do what I want? All I want is to repeatedly run my executable while testing stdout, stderr, and the exit status.
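One way to shrink the boilerplate (a sketch, assuming the same chai setup as above) is a small helper that wraps child_process.exec in a Promise and is shared by every describe block; with no done callback there is nothing to call twice:

// run a command, resolving with the exit status and captured output
// (never rejects, so a non-zero exit can itself be asserted on)
function run (cmd) {
  return new Promise(resolve => {
    exec(cmd, (error, stdout, stderr) => {
      resolve({ code: error ? error.code : 0, stdout, stderr })
    })
  })
}

describe('--version', function () {
  var result

  before(() => run(cli + ' --version').then(r => { result = r }))

  it('should succeed', () => {
    expect(result.code).to.equal(0)
  })

  it('should show a semantic version number', () => {
    expect(result.stdout).to.match(/v\d+\.\d+\.\d+/)
  })
})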

AVA testing throws

I am trying to test a function by expecting it to throw an error.

test('throws', t => {
    t.throws(() => {
        valid(1);
    }, "Error can't put number");
});

So valid is a function, and when I pass it a number I want it to throw the error. Right now it gives me AssertionError: Missing expected exception (err).

Not sure what I am doing wrong.
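The assertion error means valid(1) is not actually throwing. For reference, a minimal sketch in which the assertion passes (this valid implementation is an assumption, not the original code):

import test from 'ava';

// valid() must actually throw for t.throws() to pass
function valid (input) {
  if (typeof input === 'number') {
    throw new Error("can't put number");
  }
  return true;
}

test('throws', t => {
  const err = t.throws(() => valid(1));
  t.is(err.message, "can't put number");
});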

Java - Looking for a library for testing email functionality

I need to write tests for email functionality written using JavaMail. Specifically, I need to test authentication and secure channels (SSL/TLS). Is there a free library that provides both?

Run only tagged test in sbt custom test task

I have an sbt subproject that includes end-to-end tests, which are run as e2e:test. I have defined a tag in the same subproject:

object HealthCheckTest extends Tag("HealthCheckTest")

I am tagging some of my end-to-end tests with HealthCheckTest as follows:

it("should be able to verify the data", HealthCheckTest)

I want to run only the health check tests from command line. I am trying to do this via:

sbt 'project e2e' e2e:testOnly -- -n HealthCheckTest

but this leads to all of the e2e tests being run. I have tried giving the full path to the tag (com.s.p.e2etests.HealthCheckTest), but that does not work either.

Occasionally I get warnings about -- and - being deprecated; however, all the documentation online, including the ScalaTest docs, says to use this syntax.

How can I run just my tagged e2e tests?

I have also tried to create a separate task for health check tests, but could only figure out how to filter based on test class name, not by tag.
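One detail worth checking, offered as a hedged guess: when the tail of the command is not quoted as a single sbt command, the shell and sbt each consume parts of -- -n HealthCheckTest before ScalaTest ever sees them (which would also explain the deprecation warnings about -- and -). Quoting the whole invocation passes the tag through, e.g.:

sbt 'project e2e' 'e2e:testOnly * -- -n HealthCheckTest'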

How to configure Proguard for Android instrumentation test inside library project?

ProGuard configuration is a pain when we try to configure rules for instrumentation tests that reside inside a library module. There was an answer to the same kind of question before, but it works only for an application module, not a library one.

Is there any way to enforce ProGuard rules being applied to the instrumentation test app that is part of a library module?

apply plugin: 'com.android.library'

android {
  buildTypes {
   all {
     minifyEnabled true
     proguardFile 'proguard-rules.pro'
     testProguardFile 'test-proguard-rules.pro'
   }
  }
}

Test scenarios for state transition 0-switch and n-switch

I implemented state-transition 0-switch and n-switch algorithms in C# that analyze all graph paths, but I want to test the accuracy of my implementation with heavy test scenarios. As it is too hard to write them manually, I am searching for something already implemented, or websites offering such scenarios.

Thanks in advance, Omar

Run similar tests with Mocha

Background

I am scraping a wikia for information, and before I actually do that, I want to make sure I can connect to the wikia servers.

Best way to do it? Using mocha tests with my nodejs app!

Objective

I have a configuration file, which has an object with all the links I want. I have a test battery called "connection" and I want each test to try to connect to the wikia.

{
    "sources": {
        "wikia": {
            "link": "http://ift.tt/2jKTXk5",
            "pages": {
                "mods_2.0": "/Mods_2.0",
                "warframe_mods": "/Category:Warframe_Mods",
                //more links follow
            }
        }
    }
}

Problem

The problem here is that I don't want to write and replicate the same test for over a dozen wikia pages. I want to avoid repetition.

My solution was to put every it inside a loop; however, the code breaks because my wikiaPages array is always undefined, even when I use the before() function.

Code

let assert = require("assert");
let superagent = require("superagent");
let jsonfile = require("jsonfile");

const SCRAPER_CONFIG_FILE = "./scraperConfig.json";

describe("connection", () => {

    let wikiaUri;
    let wikiaPages;
    let completeUri;

    before(() => {
        let config = jsonfile.readFileSync(SCRAPER_CONFIG_FILE);
        wikiaUri = config.sources.wikia.link;
        wikiaPages = Object.values(config.sources.wikia.pages);
    });

    for(let pageUri of wikiaPages) {

        completeUri = wikiaUri + pageUri;

        it("connects to " + completeUri, done => {
            superagent.get(completeUri, (error, res) => {
                assert.ifError(error);
                assert.equal(res.status, 200);
                done();
            });
        });
    }
});

Questions

  • How can I fix this code so it works?
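For reference, the usual explanation is that Mocha runs the describe callback first, to collect tests, and only runs before() hooks later, so the loop executes while wikiaPages is still undefined. Since jsonfile.readFileSync is synchronous anyway, one sketch of a fix is to load the config during collection (it also moves completeUri inside the loop, so each test keeps its own value):

describe("connection", () => {

    // loaded synchronously while Mocha is collecting tests,
    // instead of in a before() hook that runs too late
    const config = jsonfile.readFileSync(SCRAPER_CONFIG_FILE);
    const wikiaUri = config.sources.wikia.link;
    const wikiaPages = Object.values(config.sources.wikia.pages);

    for (let pageUri of wikiaPages) {
        const completeUri = wikiaUri + pageUri;

        it("connects to " + completeUri, done => {
            superagent.get(completeUri, (error, res) => {
                assert.ifError(error);
                assert.equal(res.status, 200);
                done();
            });
        });
    }
});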

How to make Elasticsearch indexing "blocking" when testing with Python?

I have a few new microservices that use Elasticsearch to store and retrieve data, and I want a few integration tests that use ES. The problem I have is getting data back after some_document.save(): I have to add something like sleep(1) before the get for the tested code to retrieve the data.

Is there a way to make indexing blocking/synchronous so I don't have to use sleep in tests?
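Two mechanisms that may help, sketched with the low-level Python client (index name, doc type, and client setup are assumptions): explicitly refresh the index before reading, or pass refresh='wait_for' on the write so the call does not return until the document is searchable.

from elasticsearch import Elasticsearch

es = Elasticsearch()

# option 1: force a refresh between the write and the read
es.indices.refresh(index="my-index")

# option 2: make the write itself block until the document is visible
es.index(index="my-index", doc_type="doc",
         body={"field": "value"}, refresh="wait_for")

If some_document.save() comes from elasticsearch-dsl, it may accept the same refresh keyword and forward it to the index call, but that is worth verifying against the version in use.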

Mock model to use a different database in django

In my settings I defined 2 databases:

DATABASES = { "default": default_db, "test": test_db }

In my app code, I have the following:

def my_method():
    records = do_something()
    MyModel.objects.filter(
       start=truncate_minutes(utc_now()),
       country=country
    ).delete()

    MyModel.objects.bulk_create(records)
    do_something()

I want to mock MyModel.objects so that my test uses a different database, like this:

MyModel.objects.using("test").bulk_create(records) or
MyModel.objects.using("test").filter(
           start=truncate_minutes(utc_now()),
           country=country
        ).delete()

How can I do this from my test method without changing the existing code?

@mock("MyModel)
def test_my_method(self, mock_my_model):
   ...
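One sketch that avoids touching the production code (hedged; db_manager returns a copy of the manager bound to another database alias, and the patch simply swaps the class attribute for the duration of the test):

from unittest import mock

def test_my_method(self):
    # a manager identical to MyModel.objects, but bound to the "test" alias
    test_objects = MyModel.objects.db_manager("test")

    with mock.patch.object(MyModel, "objects", test_objects):
        my_method()  # every MyModel.objects.* call now hits "test"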

How do I test the UI of a new Activity only if certain condition is met?

Let's say I have an app with an input field and a button. I create a new Activity when the user enters the correct number and presses the button. How can I test that the correct intent was fired and that everything in the new Activity is in place, using Espresso?
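A sketch using the espresso-intents library (view ids and activity names are assumptions; intended and hasComponent are the usual static imports from android.support.test.espresso.intent):

@Rule
public IntentsTestRule<MainActivity> rule =
        new IntentsTestRule<>(MainActivity.class);

@Test
public void firesIntentForCorrectNumber() {
    onView(withId(R.id.number_input))
            .perform(typeText("42"), closeSoftKeyboard());
    onView(withId(R.id.submit_button)).perform(click());

    // verifies that the intent for the new Activity was fired
    intended(hasComponent(DetailActivity.class.getName()));
}

After the click, plain onView(...) assertions run against the new Activity, since Espresso waits for it to reach the foreground.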

How to promote the value of testing?

The test function I work in is small, but growing in size.

We are going to be holding a 'Testing Expo' for the company in order to inform colleagues at all levels what a dedicated test function does and what value they can bring.

Many of my colleagues in other areas haven't previously worked with dedicated testers (indeed, a couple didn't even realise there was a separate test team).

As a test team we want to promote how testing adds value to a project, why testers are necessary and the skills and attitudes of testers.

So, can people suggest what would demonstrate the value of testing to them?

Our current displays/presentations are: 1) Tester mindset - how testers think differently from developers; 2) Test automation - testing frameworks, and a demonstration of the framework we have built; 3) How testing fits into the development cycle.

It would be immensely useful to get the opinions of people on here, but could I politely ask that if you feel the need to 'downvote' this, that you leave a comment as to why. That in itself could provide us with valuable feedback.

Many thanks, Iain

Testing Expo - What would developers and other roles like to see?

The test function within my company is small, but growing.

A lot of our developers have come from backgrounds which lacked separate, dedicated test functions due to the nature of the products and services they were developing. This means that quite a number of the developers and other roles aren't used to working with testers and don't really understand what we do.

In order to address some of this, the test team are holding an 'Expo'/Showcase on what testing is and what value it brings, particularly in an agile environment.

My question to the non-test roles here is: what would you, as developers and other roles, like to see at this kind of event? What have you seen that demonstrated the value of testing and testers to you?

Thoughts welcome.

Sunday, January 29, 2017

How to setup different test working dir per source set in gradle?

I want to add an additional source set in my build.gradle and use a separate test working directory for this source set. How can I achieve that?

This config doesn't work, because it sets test.workingDir for the whole project.

apply plugin: 'java'

sourceSets {
    configurationTest {
        java {
            compileClasspath += main.output + test.compileClasspath + main.compileClasspath
            runtimeClasspath += main.output + test.compileClasspath + main.compileClasspath
            srcDir file('src/configuration-test/java')
        }
    }
}


test {
    useTestNG()
    workingDir = 'src/configuration-test/working_dir'
}
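A hedged sketch of one way to do it: leave the built-in test task alone and declare a second Test task wired to the new source set, with its own working directory (property names have shifted across Gradle versions; testClassesDir/classesDir is the older spelling, testClassesDirs/classesDirs the newer one):

task configurationTest(type: Test) {
    useTestNG()
    testClassesDir = sourceSets.configurationTest.output.classesDir
    classpath = sourceSets.configurationTest.runtimeClasspath
    workingDir = file('src/configuration-test/working_dir')
}

check.dependsOn configurationTest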

AVA testing gives undefined when importing to test.js

I am using AVA for testing with Node and JavaScript.

On test.js

import test from 'ava';
import {valid, output, input} from './dependency.js';

test("Input is not a Empty String", t => {
    t.not(input, ''); t.pass();
})

test("Correct output", t => {
    var testInput = ['KittenService: CameCaser', 'CamelCaser: '];
    var expected = 'CamelCaser, KittenService';
    var actual = output;
    t.deepEqual(actual, expected, "Result did match");
})

The first test passes even though my

var input = '';

Also on my second test it throws:

t.deepEqual(actual, expected, "Result did match")
              |       |
              |       "CamelCaser, KittenService"
              undefined

on dependency.js

module.exports = {valid, input, output};
var input = '';
var output = [];

output does have a value after the function runs, but it seems like test.js doesn't get either the input or output value from dependency.js. I am not exactly sure how to fix this problem.
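For reference, the usual cause (hedged): module.exports = {valid, input, output} runs before the var assignments below it, and hoisting only lifts the declarations, so the exported object captures undefined for every key. A sketch of dependency.js with the export moved after the assignments:

// assign first...
var input = '';
var output = [];

function valid (value) {
  // ...validation logic here...
}

// ...export last, once the values exist
module.exports = { valid, input, output };

Note also that primitives are copied into the exported object at this point, so values reassigned later inside dependency.js still won't be seen by the test; exporting a function (or mutating a property of an exported object) avoids that.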

Node Ava getting undefined when importing

I am running AVA for testing, and when I import a certain variable into test.js it throws an error saying that the variable is undefined. In my test.js:

    import {valid, input} from './dependency.js';

    test("one plus one is two", t => {
        t.deepEqual(input, output);
    });

It throws: input is undefined.

In dependency.js:

    module.exports = {valid, input};
    var input = ["Test", "Hello World"];

My package.json

{
  "name": "assessment",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "ava"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "ava": "^0.17.0"
  }
}

PostgreSQL performance testing - precautions?

I have some performance tests for an index structure over some data. I will be comparing 2 indexes side by side (still undecided whether I will use 2 VMs). I need the results to be as neutral as possible, of course, so I have these kinds of questions, which I would appreciate any input on. How can I ensure/control what is influencing the test? For example, caching effects and the order of arrival from one test to another will influence the result. How can I measure these influences? How do I create a suitable warm-up? And what kind of statistical techniques can I use to nullify such influences (I don't think just averaging is enough)?

ReferenceError: requirejs is not defined at

I am using Mocha with Zombie.js to test my application which uses require.js.

I have the following content in the test.js file:

process.env.NODE_ENV = 'test';

var express = require('express'),
  http =require('http'),
  assert = require('assert');

app = express();
app.use(express.static('src'));

const Browser = require('zombie');

describe('User visits signup page', function() {  
  before(function() {
    this.server = http.createServer(app).listen(3000);
    this.browser = new Browser({ site: 'http://localhost:3000', debug: true, runScripts: true });
  });

  before(function(done) {
    this.browser.visit('/index.html', done);
  });

  describe('submits form', function() {

    before(function() {
      return this.browser;
    });

    it('should be successful', function() {
      this.browser.assert.success();
    });

    it('should see welcome page', function() {
      this.browser.assert.text('title', 'Page Title');
    });
  });

  after(function(done) {
    this.server.close(done);
  });
});

When I launch the test (with the command line mocha test/test.js), I encounter the following error:

User visits signup page
    1) "before all" hook


  0 passing (341ms)
  1 failing

  1) User visits signup page "before all" hook:
     ReferenceError: requirejs is not defined
      at http://localhost:3000/index.html:script:2:10
      in http://localhost:3000/index.html

It seems that somehow Mocha/Zombie doesn't wait for the require.js script to be loaded, or something similar. Have you encountered this issue before? Many thanks.

Cannot inject service into Spring test

financialReportService is null, which indicates that it fails to be injected. The test:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = SnapshotfindocApp.class)
public class FindocResourceIntTest {

    @Inject
    private FinancialReportService financialReportService; // todo: null

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
        FindocResource findocResource = new FindocResource();
        ReflectionTestUtils.setField(findocResource, "findocRepository", findocRepository);
        this.restFindocMockMvc = MockMvcBuilders.standaloneSetup(findocResource)
            .setCustomArgumentResolvers(pageableArgumentResolver)
            .setMessageConverters(jacksonMessageConverter).build();
    }

    @Test
    @Transactional
    public void getFinancialRecords() throws Exception {

        // Get all the financial-reports
        restFindocMockMvc.perform(get("/api/financial-reports"))
            .andExpect(status().isOk());
        List<Findoc> finReports = financialReportService.getFinancialReports();
        for (Findoc fr : finReports) {
            assertThat(fr.getNo_months()).isBetween(12, 18);
            LocalDate documentTimeSpanLimit = LocalDate.now().minusMonths(18);
            assertThat(fr.getFinancial_date()).isAfterOrEqualTo(documentTimeSpanLimit);
        }
    }

The service:

@Service
@Transactional
public class FinancialReportService {

    private final Logger log = LoggerFactory.getLogger(FinancialReportService.class);

    @Inject
    private FinancialReportDAO financialReportDAO;

    public List<Findoc> getFinancialReports(){
        return financialReportDAO.getFinancialReports();
    }

}

Saturday, January 28, 2017

$compile: ctreq error not thrown after upgrade to Angular 1.6.1

I have just started the migration from Angular 1.4.8 -> 1.6.1, and I am left with one unit test which refuses to pass. We want to verify that the directive we wrote will not change in the future, such that it always keeps its require attribute.

Here is the directive definition:

angular.module('ourApp')
    .directive('ourdirective', ['$timeout', function($timeout) {
    return {
      restrict: 'AE',
      require: 'ngModel',
      scope: {
        options: '=?',
        max: '=',
        ngModel: '='
      },
      templateUrl: 'ourhtmltemplate.html',
      link: function(scope, elt, attrs, ctrl) { /* some code here */ }
    };
  }]);

And here is the unit test which we wrote for it. You can assume that there are no compilation errors of any kind, and that the other tests pass.

it('should throw if no ng model present', function() {
  expect(function() {
    buildElement("<ourdirective ></ourdirective>");
  }).toThrowError();
});

function buildElement(html) {
  element = angular.element(html);
  $compile(element)($rootScope);
  $rootScope.$digest();
  $rootScope.select = {}
  isolatedScope = element.isolateScope();
  $rootScope.$apply(function() {});
}

We are using the following packages:

"angular": "1.6.1", "angular-mocks": "1.6.1", "jasmine-expect": "1.22", "karma": "^0.12.31", "karma-jasmine": "^0.3.5", "karma-jasmine-matchers": "^0.1.3", "karma-phantomjs-launcher": "^0.1.4",

Any help would be most appreciated.

How to prevent spring boot from auto creating instance of bean 'entityManagerFactory' at startup?

I am working on a Spring Boot application that uses Spring JPA with PostgreSQL. I am using @SpringBootTest(classes = <my package>.Application.class) to initialize my unit test for a controller class. The problem is that this causes the entityManagerFactory bean (and many other objects related to JPA, the data source, JDBC, etc.) to be created, which is not needed for unit tests. Is there a way to prevent Spring from automatically creating these objects until they are actually used for the first time? I spent a lot of time trying to load only the beans I need for my unit test but ran into many errors. I am relatively new to Spring and I am hoping someone else has run into this before and can help. I can post code snippets if needed.
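One hedged option: rather than deferring the JPA beans, Spring Boot 1.4+ test slices simply never create them. For a controller test, a sketch (controller and service names are assumptions):

@RunWith(SpringRunner.class)
@WebMvcTest(MyController.class)        // loads only the web layer
public class MyControllerTest {

    @Autowired
    private MockMvc mockMvc;           // no entityManagerFactory involved

    @MockBean
    private MyService myService;       // replaces the real JPA-backed bean

    @Test
    public void returnsOk() throws Exception {
        mockMvc.perform(get("/api/things")).andExpect(status().isOk());
    }
}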

How to iterate through a class in Selenium WebDriver through Node.js?

My Mocha tester runs on a Selenium driver in a Node.js environment. I'm trying to iterate through dynamically created elements of a class. Something like this:

for (let result of $('.results').value ) { result.getText; }

The problem is that $('.results').value is an array of elements structured like this: {ELEMENT: 0}, which can't be used for additional data. Any suggestions?
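If this is WebdriverIO v4 in sync mode (an assumption based on the $('.results').value syntax), the raw {ELEMENT: n} objects are meant to be fed back into protocol commands such as elementIdText; a sketch:

const results = browser.elements('.results').value;

results.forEach(raw => {
    // feed the raw element id back into a protocol command
    const text = browser.elementIdText(raw.ELEMENT).value;
    console.log(text);
});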

How do you convert a mocha unit test into a mocha testcheck test?

I may be misinformed about how this all works, but from what I understand, when you create a mocha-testcheck test, you need to enter your information in a completely different syntax than when you write a straight-up Mocha test.

The mocha test I have right now looks something like this:

{ should: 'return true bold', criteria: {rules: [{label: '<b>word</b>'}]}, attempt: {answer: '<b>word</b>'}, expected: true },

I'm looking at the example that is in the mocha-testcheck documentation, and the example looks like this:

describe('MySpec', function () {

  check.it('accepts an int and a string', [gen.int, gen.string], function (x, y) {
    assert(typeof x === 'number');
    assert(typeof y === 'string');
  });

});

Is there any way to easily convert what I have into what the second example indicates?

Thanks in advance!!!

Capacity test on Apache WebServer

I was trying to do a capacity test on an Apache web server, but there are some results I can't understand: according to capacity-planning theory, I should see three different regions in the plot of throughput in vs. out.

  1. In the first region the expected result is the line y=x, meaning that the web server can follow my requests and reply to all with the code 200-OK (Thus, the throughput I request is equal to the throughput I get).
  2. In the second region the expected result is the line y=k, where k is that throughput that indicates the saturation of the web server (Thus, the throughput I get can't go further k).
  3. In the third region the expected result is a curve that goes from k to zero, that shows the degradation of web server, which for memory or CPU leaks starts to reject requests.

I tried to replicate the experiment with a virtual machine running an instance of Apache as the server and the physical machine running an instance of Apache JMeter as the client. The result I get covers only the first two regions; even if I request a very, very large throughput in samples/second, I always get the saturation value.

Why can't I get the server to go down, even when the CPU is 0% idle and the remaining memory is about 10 MB? Or maybe this is the correct behavior and my hypothesis was incorrect? Thank you in advance.

Friday, January 27, 2017

Generate dynamic tests based on a parameter in Nightwatch

I'm using NightwatchJS to automate the tests on our reporting website.

This is my current code:

module.exports = {
  '@tags': ['Report 1','base','full'],
  'Report 1' : function (browser) {
    checkAnalisi(browser, 'report1', 1, 2015, '767.507')
  }
};

function checkAnalisi(browser, nomeAnalisi, scheda, year, risultatoAtteso){
  return browser
      .url('http://ift.tt/2kCujzy' + nomeAnalisi)
      .waitForElementVisible('body', 5000)
      .selectScheda(scheda-1) //Seleziona la scheda (0-based)
      .selectPromptValue('Year', year)
      .selectRappresentazione('Table')
      .waitForElementVisible('table', 5000, true)
      .assert.containsText('table tr:last-child td:last-child', risultatoAtteso)
      .end();
}

I made some helper commands to select different things in the page:

.selectScheda(scheda-1) .selectPromptValue('Year', year) .selectRappresentazione('Table')

selectPromptValue takes a prompt name and the value to set for it in the page.

For now the function only sets the year parameter but in my reports I also have different parameters.

What I want to do is pass an object to the checkAnalisi function to dynamically generate tests. For example, to set several prompt values, I want to pass something like [['Year', 2015],['Another prompt','another value']], and the checkAnalisi function should add 2 .selectPromptValue steps with the respective values.

Is it possible to loop over an input array in my function to add more steps?
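It should be: since each command in the chain returns the browser object, the chain can be extended in a plain loop. A sketch reworking checkAnalisi to accept name/value pairs (the signature change is illustrative):

function checkAnalisi(browser, nomeAnalisi, scheda, prompts, risultatoAtteso) {
  let chain = browser
      .url('http://ift.tt/2kCujzy' + nomeAnalisi)
      .waitForElementVisible('body', 5000)
      .selectScheda(scheda - 1); // Seleziona la scheda (0-based)

  // one .selectPromptValue step per [name, value] pair
  for (const [name, value] of prompts) {
    chain = chain.selectPromptValue(name, value);
  }

  return chain
      .selectRappresentazione('Table')
      .waitForElementVisible('table', 5000, true)
      .assert.containsText('table tr:last-child td:last-child', risultatoAtteso)
      .end();
}

// e.g. checkAnalisi(browser, 'report1', 1, [['Year', 2015], ['Another prompt', 'another value']], '767.507')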

How to mock a method which is called from the render method, using shallow (Enzyme)

I need to mock getZValue so that when I do a shallow render, I can render something different based on the z value. How do I test it? Below is sample code. Can I spy on this method to return a value?

class AbcComponent extends React.Component {
  render() {
    const z = this.getZValue();
    return <div>{z}</div>;
  }

  getZValue() {
    // some calculations
  }
}


describe('AbcComponent', () => {

  it('Test AbcComponent', () => {

    const wrapper = shallow(<AbcComponent/>);

  });
});
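One sketch using Sinon (the return value and the assertion style are illustrative): stub the method on the prototype before rendering, so the instance created by shallow() picks the stub up.

describe('AbcComponent', () => {
  it('renders the stubbed z value', () => {
    const stub = sinon.stub(AbcComponent.prototype, 'getZValue').returns(42);

    const wrapper = shallow(<AbcComponent />);
    expect(wrapper.text()).toEqual('42');

    stub.restore(); // undo the prototype patch for other tests
  });
});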

How can I share setup and teardown methods across packages when testing Go?

Let's say I have two packages, foo and bar. Each package has a source file and a test file:

foo
---widget.go
---widget_test.go
bar
---wingding.go
---wingding_test.go

Now, for both tests (widget_test.go and wingding_test.go), I want to share some setup code. I know I can put this code inside each package in a main_test.go. But I obviously don't want to copy/paste code in two places. So where can I put this code so that it's shared across packages?
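The common pattern (a sketch; the module/import path is an assumption) is a third package holding the shared helpers, imported from each package's TestMain:

// testutil/testutil.go
package testutil

func Setup()    { /* shared setup code */ }
func Teardown() { /* shared teardown code */ }

// foo/main_test.go (and the same shape in bar)
package foo

import (
    "os"
    "testing"

    "myproject/testutil" // import path assumed
)

func TestMain(m *testing.M) {
    testutil.Setup()
    code := m.Run()
    testutil.Teardown()
    os.Exit(code)
}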

PHPUnit autoloader classes with composer

My project's structure is:

--/
--src
--tests
--phpunit.xml
--composer.json

I want to use Composer to autoload my classes from the src folder in my tests. My composer.json:

{
"name": "codewars/pack",
"description": "Codewars project",
"type": "project",
"require": {
    "fxp/composer-asset-plugin": "^1.2.0",
    "phpunit/phpunit": "5.5.*",
    "phpunit/dbunit": "2.0.*"
},
"autoload": {
    "psr-4": {"Source\\": "src/"
    }
}

}

The generated autoloader file:

<?php

// autoload_psr4.php @generated by Composer

$vendorDir = dirname(dirname(__FILE__));
$baseDir = dirname($vendorDir);

return array(
'Source\\' => array($baseDir . '/src'),
);

My phpunit.xml:

<phpunit bootstrap="vendor/autoload.php">
<testsuites>
    <testsuite name="Tests">
        <directory>tests</directory>
    </testsuite>
</testsuites>
</phpunit>

And my test file example:

class Task2Test extends PHPUnit_Framework_TestCase
{
public function testTask2(){
    $list=[1,3,5,9,11];
    $this->assertEquals(7,\Source\findMissing($list));
    $list=[1,5,7];
    $this->assertEquals(3,\Source\findMissing($list));
 }

}

And when I run the tests I get an error such as: Fatal error: Call to undefined function Source\findMissing()

Please help me: how can I solve this problem?
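One hedged observation: PSR-4 autoloading only resolves classes, while findMissing appears to be a plain function, and Composer loads function files through the separate files key. A sketch of the autoload section (the file name is taken from the error message):

"autoload": {
    "psr-4": { "Source\\": "src/" },
    "files": [ "src/Task1.php" ]
}

followed by composer dump-autoload to regenerate the autoloader.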

_this.store.getState is not a function when testing react component with enzyme and mocha

I'm trying to test a React component with Enzyme and Mocha as follows:

import { mount, shallow } from 'enzyme';
import React from 'react';
import chai, { expect } from 'chai'
import chaiEnzyme from 'chai-enzyme'
import sinon from 'sinon'

import MyComponent from 'myComponent'

chai.use(chaiEnzyme())
describe('MyComponent', () => {
  const store = {
    id: 1
  }
  it ('renders', () => {
    const wrapper = mount(<MyComponent />, {context: {store: store}})
  })
})

I haven't actually written the test yet, as it fails at the declaration of wrapper.

Error message: TypeError: _this.store.getState is not a function

I have no idea what the problem is and can't find anything addressing this!

Any help would be great!
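For reference, mount()ing a redux-connected component makes react-redux call store.getState() on the context store, so the stub has to look like a real store; a minimal sketch (state shape assumed):

const store = {
  getState: () => ({ id: 1 }),
  subscribe: () => () => {}, // returns an unsubscribe function
  dispatch: () => {}
}

const wrapper = mount(<MyComponent />, { context: { store } })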

Alternatives to Selenium

I've been having a lot of trouble with Selenium. Does anyone have an alternative? I'm very new to all this (web dev, testing, etc.). I'm looking for something with which I could record scripts and run them in Jenkins. I wonder what people are using for this.

Testing asynchronous React state changes

I have a React component for an onboarding process for my app. It has 5 consecutive screens, and I want to write a test that starts at the first screen and verifies that it can indeed click through all 5 screens.

I'm using react-router to handle state changes, and I'm guessing a lot of what it does is asynchronous, because just doing a TestUtils.Simulate.click is insufficient to trigger a DOM update.

I tried

click();
return Promise.resolve().then(() => {
  // check for screen here
});

but this doesn't work either; only setTimeout + done seems to work for me, so I ended up with this monstrosity, where actions is an array of functions that, in turn, click the 'next' button on my onboarding screen and verify that they're on the right screen.

const chain = (actions: Function[]) => {
    Array(actions.length).fill(0).forEach((_, i) => setTimeout(actions[i], 20 * (i + 1)));
    setTimeout(done, (actions.length + 1) * 20);
};

Is there a better way of doing this?
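One smaller pattern (a sketch; it assumes each state change settles within pending promise chains and immediates, which is typically the case):

// resolves after pending I/O callbacks and promise chains have run
const flush = () => new Promise(resolve => setImmediate(resolve));

it('walks through all five screens', async () => {
  for (const action of actions) {
    action();      // clicks 'next' and asserts the screen
    await flush(); // let the async state change land
  }
});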

Getting Started with appium and Testobject

I am getting started with TestObject and want to use Appium for it. I tried the setup as displayed on the TestObject page, but I do not know how to start my test and what I may have missed. I installed:

npm install appium -g

here is my config:

exports.config = {
    protocol: 'https',
    host: 'app.testobject.com',
    port: '443',
    path: '/api/appium/wd/hub',


    capabilities: [{
        testobject_api_key: 'myKey',
        testobject_device: 'LG_Nexus_4_E960_real',
        browserName: 'Chrome'
    }],

    specs: [
        '.Testspec.js'
    ],

    sync: true,
    logLevel: 'verbose',
    coloredLogs: true,
    screenshotPath: './errorShots/',
    waitforTimeout: 10000,
    connectionRetryTimeout: 10 * 60000,
    connectionRetryCount: 3,
    framework: 'mocha', //do i need this? what is needed to install?
    mochaOpts: {
        ui: 'bdd',
        enableTimeouts: false
    }
};

and here is my spec:

describe('TestObject website', function() {
    before(function() {
        browser.timeouts('implicit', 10000);
        browser.url('https://testobject.com');
    });

    it('Opens features page', function() {
        var learnMore = "//a[contains(text(), 'Learn More')]";
        browser.scroll(learnMore);
        browser.element(learnMore).click();
        var pageUrl = browser.getUrl();
        assert.equal(pageUrl, "http://ift.tt/2jblxs2")
    });
});

Now how do I start the test? I thought of something like:

appium Config.js.
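Hedged, since the config shown is a WebdriverIO-style exports.config: such files are started with the wdio test runner rather than the appium CLI, e.g.:

npm install webdriverio wdio-mocha-framework mocha --save-dev
./node_modules/.bin/wdio Config.js

framework: 'mocha' tells the runner to execute the specs with Mocha, which is why wdio-mocha-framework (and mocha itself) need to be installed.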

TFS: Link test result to Task

I have the following structure:

Bug

  • Dev task

  • Test Result with the failed test case

  • Tested by

To retest the bug, I have created a RETEST task and linked it to the bug. When I rerun the test case, I would like to add the passed test result to the RETEST task. How can I do this? Or what is the best workflow?

Testing the numbers of rows created with Laravel

I'm writing a functional test with Laravel / PHPUnit.

What I expect is to have 2 rows with championship_id = 123.

But the content of each row may vary.

I only know how to check whether a row exists:

$this->seeInDatabase('championship_settings',
            ['championship_id' => $championship->id,
            ]);

but I don't know how to check that there are 2 rows matching the criteria.

Any idea how I should do it?
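A sketch using the query builder directly (table and column names taken from the snippet above):

$count = \DB::table('championship_settings')
    ->where('championship_id', $championship->id)
    ->count();

$this->assertEquals(2, $count);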

iOS UITesting How to Dismiss Popover

I have a pretty complicated app with a lot of views and popovers for quickly picking entries.

I'm not able to dismiss a popover. I tried a lot of things, like:

  • Hitting coordinates in the window
  • app.otherElements["PopoverDismissRegion"]
  • Hitting elements behind the popover (which are not hittable at all)

When I record it in Xcode I get: app.otherElements["PopoverDismissRegion"]

Which makes no sense to me.

Hope someone can help.

Thx

Info: iOS 10.2, Xcode 8.2.1, iPad Air 2 (device and simulator, same results)

Thursday, January 26, 2017

Angular2 Component test - Error: Can't resolve all parameters

I'm working on testing one of my components for the first time using Karma/Jasmine etc., and have been mostly following along with the docs on testing. My component requires 3 constructor arguments:

constructor(
  private myService: MyService,
  private renderer: Renderer,
  private element: ElementRef
) { }

I have attempted to mock/stub those dependencies, based on this section of the docs, as follows:

// Mocks/Stubs
const myServiceStub = {};
class MockElementRef {}
class MockRenderer {}

// beforeEach block
beforeEach(() => {
  TestBed.configureTestingModule({
    declarations: [ MyComponent ],
    providers: [
      { provide: ElementRef, useClass: MockElementRef },
    { provide: Renderer, useClass: MockRenderer },
      { provide: MyService, useValue: myServiceStub},
    ]
  });

  fixture = TestBed.createComponent(MyComponent);
});

Despite this, whenever I run my tests I get the following error;

Error: Can't resolve all parameters for MyComponent: (?, ?, ?).
    at SyntaxError.ZoneAwareError (test.ts:9250:33)
    at SyntaxError.BaseError [as constructor] (test.ts:44243:16)
    at new SyntaxError (test.ts:44453:16)
    at CompileMetadataResolver._getDependenciesMetadata (test.ts:61503:31)

What am I missing here? Thank you!

Testing for isomorphic javascript application

I have a JavaScript application that consists of client-side code and server-side (Node/Express) code. Is there a testing solution that covers both client and server, or do you have to run separate test frameworks for each?

Connect to a MySQL database with ADOdb in PHPUnit?

My test file in PHPUnit doesn't connect to MySQL using ADOdb. The code:

public function getConnection() {
    $HOST = "127.0.0.1";
    $USERNAME = "username";
    $PASSWORD = 'password';
    $DBTYPE = "mysqli";
    $dbName = "DBName";

    $this->db = ADONewConnection($DBTYPE);
    $this->db->debug = true;
    $this->db->Connect($HOST, $USERNAME, $PASSWORD, $dbName) or die("Unable to connect!");
}

The response:

127.0.0.1: Missing extension for mysql
Unable to connect!

Why is the connection not possible? What is wrong in the code? Please help.

How to navigate to a page in Selenium without waiting for AJAX responses

How the page I'm trying to test works is

  1. Open page
  2. A button is supposed to be disabled
  3. An AJAX request that was sent on page load finishes
  4. The button is enabled

However, when I try to do something like

driver.Navigate().GoToUrl("https://thepage.com"); 
Assert.IsFalse(driver.FindElement(By.Id("the-button")).IsEnabled());

The problem is that the AJAX request finishes between the first and second lines, and therefore I can't properly test that the button is disabled at first. Is there any way to do a Navigate().GoToUrlWithoutWaitingForAnything, or any hack I could use for this test?

Deep AND Close array equality with Chai

So in Chai, .deep.equals allows one to compare arrays by value, and .closeTo (and .approximately) allows one to compare floats to a specified accuracy. I'm drawing a blank on how to get it to do both, though, i.e. test "close" equality of an array of floats, e.g.

expect([0.1,0.2,0.34]).to.beDeeplyCloseTo([0.1,0.2,0.33333333]);

Thanks!
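Short of pulling in a plugin, one sketch is to assert the lengths match and then apply closeTo element-wise (the tolerance value is an assumption):

const actual = [0.1, 0.2, 0.34];
const expected = [0.1, 0.2, 0.33333333];

expect(actual).to.have.lengthOf(expected.length);
expected.forEach((value, i) => {
  expect(actual[i]).to.be.closeTo(value, 0.01);
});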

How to adapt data split sizes with createDataPartition()

I have a question concerning data splitting into train, test & validation sets with createDataPartition(). I found a solution that fits perfectly for a 60/20/20 split. However, I don't see a way to adapt the splitting with it while still ensuring that my data does not overlap. I.e., I would like to split into 80/10/10 or whatever.

# Draw a random, stratified sample including p percent of the data    
idx.train <- createDataPartition(y = iris$Species, p = 0.8, list = FALSE) 
# training set with p = 0.8
train <- iris[idx.train, ] 
# test set with p = 0.2 (drop all observations with train indices)
test <-  iris[-idx.train, ] 
# Draw a random, stratified sample of ratio p of the data
idx.validation <- createDataPartition(y = train$Species, p = 0.25, list = FALSE) 
#validation set with p = 0.8*0.25 = 0.2
validation <- train[idx.validation, ] 
#final train set with p= 0.8*0.75 = 0.6
train60 <- train[-idx.validation, ] 
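
A sketch of how the same pattern seems to generalize, drawing the second partition from the held-out part instead of the training part (here 80/10/10):

library(caret)

# 80% train, 20% held out
idx.train <- createDataPartition(y = iris$Species, p = 0.8, list = FALSE)
train <- iris[idx.train, ]
holdout <- iris[-idx.train, ]
# split the held-out 20% in half: 10% validation, 10% test, no overlap
idx.validation <- createDataPartition(y = holdout$Species, p = 0.5, list = FALSE)
validation <- holdout[idx.validation, ]
test <- holdout[-idx.validation, ]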

[Cucumber][JVM] How can I use Page Objects?

Please, I need help. I'm a beginner with Cucumber JVM; can anyone explain to me how I can work with page objects?

I also don't know how I should organise my project.
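
In broad strokes, a page object is just a class that wraps one page's locators and actions so your step definitions stay readable. A minimal, hypothetical sketch:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// hypothetical LoginPage; step definitions call login() instead of raw locators
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}

A Cucumber step definition then becomes a one-liner such as new LoginPage(driver).login("bob", "secret"), and page objects typically live in their own package (e.g. pages/) next to the step definitions.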

Thank you in advance.

Yours sincerely.

How to use Selenium with "chart.js"

I've been asked to use Selenium to write some tests for a website. Several of the pages have graphs on them that are generated by the "chart.js" library. The tests require me to:

  • Read the size of some of the data values in the chart
  • Click on certain bars on the chart
  • Hover over certain bars and validate the tooltips

The trouble is that the chart is implemented as a single HTML canvas element, so there is no DOM for the details of the chart that Selenium can manipulate.
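
One workaround, offered as a sketch and assuming Chart.js 2.x (which keeps a registry of live charts in Chart.instances): read the chart's data model through JavaScript rather than the DOM:

// a sketch: pull the first chart's first dataset out of the page's JS context
JavascriptExecutor js = (JavascriptExecutor) driver;
Object data = js.executeScript(
    "var keys = Object.keys(Chart.instances);" +
    "return Chart.instances[keys[0]].data.datasets[0].data;");

Clicking and hovering would still have to go through coordinates on the canvas element (e.g. Actions.moveToElement with offsets), since there are no child elements to target.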

CppUnit integration into TFS 2015

Is there a way to integrate CppUnit test in TFS 2015? Are there any adapters available?

I am thinking also to take the results from CppUnit (.xml or .log) and publish them to build Tests tab. Is this possible?

Adding test details to a VSTS / TFS test summary

I'm running automated Selenium tests and need to record some of the test details (username that was created, password etc) - how do I add these details to a VSTS / TFS test summary?

It seems like the Details section would be the ideal place, but I can't find a way to add data in there... I looked at using TestContext, but that didn't seem to provide this functionality.

[screenshot: VSTS test summary]

PHPUnit doesn't see files from other folder

My project has this structure: Root/src and Root/tests.

I write my tests in the tests folder. When all included files are in the Root folder everything works fine, but when I put my files in src I get this error:

Warning: require(../src/Task1.php): failed to open stream: No such file or directory in E:\xampp\htdocs\CodewarsPHP\tests\Task1Test.php on line 2

Fatal error: require(): Failed opening required '../src/Task1.php' (include_path='E:\xampp\php\PEAR') in E:\xampp\htdocs\CodewarsPHP\tests\Task1Test.php on line 2

My test file:

<?php
require '../src/Task1.php';
class Task1Test extends PHPUnit_Framework_TestCase
{
    public function testTask1() {
        $this->assertEquals([1,1,1,3,5,9], fib([1,1,1], 6));
    }
}
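
For what it's worth, a relative require like this resolves against the working directory PHPUnit was launched from, not against the test file, so the usual fix is to anchor the path to the file itself; a sketch:

<?php
// a sketch: __DIR__ is the directory of the current file, so this works
// no matter where phpunit is launched from
require __DIR__ . '/../src/Task1.php';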

jasmineReporters.JUnitXmlReporter doesn't generate XML Report

I have a problem with the JUnitXmlReporter: it doesn't generate the XML file.
I run the tests with: protractor example-test.js. I get no errors, but the file is never generated. Please help.

local.ts file

import { Config } from 'protractor';
var jasmineReporters = require('jasmine-reporters');
export const ENV: Config = {
    capabilities: {
        'browserName': 'chrome',
        'version': 'ANY'
    },


    onPrepare: function() {
        jasmine.getEnv().addReporter(new jasmineReporters.JUnitXmlReporter({
            consolidateAll: true,
            savePath: '/Users/test/Desktop/test2/automatic_tests/raports',
            filePrefix: 'xmloutput'
        }));
    }
};


test.ts file
import { Config } from 'protractor';

import { ENV } from './local';

export const TestConfig: Config = {
    framework: 'jasmine2',
    untrackOutstandingTimeouts: true,
    jasmineNodeOpts: {
        showColors: true
    },
    allScriptsTimeout: 20000,
    noGlobals: true,
    capabilities: ENV.capabilities,
    seleniumAddress: ENV.seleniumAddress,
    baseUrl: ENV.baseUrl,
    params: ENV.params
};


test-runner.ts
import { Config } from 'protractor';
import { TestConfig } from '../../test';

export let config: Config = TestConfig;
config.specs = ['example-test.js'];
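
One thing that stands out in the configs as shown: TestConfig copies capabilities, seleniumAddress, baseUrl and params from ENV, but never onPrepare, so the reporter registered there would never run. A sketch of the missing line:

// a guess from the code shown: forward onPrepare as well
export const TestConfig: Config = {
    // ...options as above...
    capabilities: ENV.capabilities,
    onPrepare: ENV.onPrepare
};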

Help please

Web application testing in real life

I am working on a fairly big platform based on a PHP framework.

I was asked to think about testing it; to begin with, these should be automated tests.

I am reading about testing and it is really confusing me. People keep writing and talking about "testing", but I cannot find any specific information about what the tests are actually about.

Most of the articles point me to Selenium, but what does it actually do? I can record myself clicking through my registration form and loop it, but to what end?

This question might sound stupid, but what is testing actually about? What can I test on a huge platform like the one I am working on right now?

Maximize browser in Sencha test

How can I maximize browser window in Sencha test 2.0.0 when the test is run via WebDriver?

mercredi 25 janvier 2017

Providing a mock to another mock's constructor?

I'm curious about whether I'm doing things right here, as I'm new to testing.

I have two services (so far), AuthService and CommentService. CommentService depends on AuthService, and AuthService depends on a 3rd party class (\SlimSession\Helper).

In my unit test for AuthService I simply mock \SlimSession\Helper and provide the mock to the constructor.

Now, in my test for CommentService I mock the AuthService and provide it to the constructor of the CommentService. But I must also create a mock of the \SlimSession\Helper again to provide it to the constructor of my mocked AuthService.

$session = $this->getMockBuilder('\SlimSession\Helper')
  ->getMock();

$auth = $this->getMockBuilder('\App\Service\AuthService')
  ->setConstructorArgs([$session])
  ->getMock();

$auth->expects($this->any())->method('isLoggedIn')->willReturn(false);
$auth->expects($this->any())->method('getLoggedInUserId')->willReturn(0);

Is this right? It seems a bit silly to have to provide a (mock) dependency to a mock object, since it won't use that dependency anyway.

My question regards PHPUnit in particular, but I guess it could be made generally for unit testing.
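
If it helps, PHPUnit can skip the mocked class's constructor entirely, which removes the need for the inner session mock; a sketch:

// a sketch: no constructor runs, so no \SlimSession\Helper mock is needed
$auth = $this->getMockBuilder('\App\Service\AuthService')
  ->disableOriginalConstructor()
  ->getMock();

$auth->expects($this->any())->method('isLoggedIn')->willReturn(false);
$auth->expects($this->any())->method('getLoggedInUserId')->willReturn(0);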

Using the adb shell command to test apps, how do I make it continue even if one of the tests fails?

I am testing an Android app using command like:

adb shell am instrument -w -e class net.mandaria.test.TippyTipperTest,net.mandaria.test.TippyTipperTest2,net.mandaria.test.TippyTipperTest3 http://ift.tt/2klT1kd

However, when one of the tests fails, the entire test execution stops. For example, if the first test "net.mandaria.test.TippyTipperTest" fails, I get this output:

net.mandaria.test.TippyTipperTest:INSTRUMENTATION_RESULT: shortMsg=junit.framework.AssertionFailedError
INSTRUMENTATION_RESULT: longMsg=junit.framework.AssertionFailedError: shows enter
INSTRUMENTATION_CODE: 0

My question is: how can I make it continue to run all the tests, even if the first one fails?

Should configuration files be part of unit tests?

I believe this question to be general in nature, even though I mention the Spring framework here.

I've been working with the Spring framework, and one of the things I've seen a lot of is code like this:

The Spring framework will map these values in a config.yaml file...

server:
    url: http://someUrl
    port: 8080
    user: user
    password: secret

to the following class...

@ConfigurationProperties("server")
public class ServerConfig {
    private String url;
    private Integer port;
    private String user;
    private String password;
    //getters, setters...
}

My point is that objects get instantiated at run time based on a configuration file.

My question is, when unit testing, should I incorporate configuration files in my tests, or should all setup and configuration happen in code?

I seem to think that unit tests should be deterministic and not rely on dependencies like a configuration file for setup. But if the configuration file is part of the tests, then maybe that's OK?
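
For the deterministic route, the properties class can simply be built in code, since @ConfigurationProperties classes are plain beans; a sketch (assuming the usual setters exist):

// a sketch: no Spring context, no YAML, fully deterministic
@Test
public void clientUsesConfiguredServer() {
    ServerConfig config = new ServerConfig();
    config.setUrl("http://localhost");
    config.setPort(8080);
    config.setUser("user");
    config.setPassword("secret");

    // exercise the unit that consumes ServerConfig here
}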

org.openqa.selenium.WebDriverException: Tried to run command without establishing a connection

I am trying to run a simple login test on Firefox and Chrome browsers in parallel using the parallel="tests" attribute in my testNG.xml file. At any given time, the tests are being run on only one browser, either Firefox or Chrome randomly, but not in both. Below is the TestNG trace. Please let me know how to solve this issue.

org.openqa.selenium.WebDriverException: Tried to run command without establishing a connection
Build info: version: 'unknown', revision: '1969d75', time: '2016-10-18 09:43:45 -0700'
System info: host: 'Lavanyas-MacBook-Pro.local', ip: '192.168.1.12', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.11.6', java.version: '1.8.0_102'
Driver info: org.openqa.selenium.firefox.FirefoxDriver
Capabilities [{rotatable=false, raisesAccessibilityExceptions=false, marionette=true, firefoxOptions={args=[], prefs={}}, appBuildId=20170118123726, version=, platform=MAC, proxy={}, command_id=1, specificationLevel=0, acceptSslCerts=false, processId=25557, browserVersion=51.0, platformVersion=15.6.0, XULappId={ec8030f7-c20a-464f-9b0e-13a3a9e97384}, browserName=firefox, takesScreenshot=true, takesElementScreenshot=true, platformName=darwin}]
Session ID: 0167b025-96f1-0143-b03d-742695fc50e0
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.openqa.selenium.remote.http.W3CHttpResponseCodec.createException(W3CHttpResponseCodec.java:127)
    at org.openqa.selenium.remote.http.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:93)
    at org.openqa.selenium.remote.http.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:42)
    at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:163)
    at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:82)
    at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:601)
    at org.openqa.selenium.remote.RemoteWebElement.execute(RemoteWebElement.java:274)
    at org.openqa.selenium.remote.RemoteWebElement.sendKeys(RemoteWebElement.java:98)
    at automationFramework.TestngParameters.test(TestngParameters.java:47)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
    at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
    at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
    at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
    at org.testng.TestRunner.privateRun(TestRunner.java:756)
    at org.testng.TestRunner.run(TestRunner.java:610)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:387)
    at org.testng.SuiteRunner.access$000(SuiteRunner.java:39)
    at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:421)
    at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

This is my testNG.xml file:

<!DOCTYPE suite SYSTEM "http://ift.tt/19x2mI9" >
<suite name="Suite" parallel="tests">

<test name="FirefoxTest">
    <parameter name="browser" value="firefox" />
    <parameter name="sUsername" value="ldpb1@gmail.com" />
    <parameter name="sPassword" value="testpass" />
    <classes>
        <class name="automationFramework.TestngParameters" />
    </classes>
</test>

<test name="ChromeTest">
    <parameter name="browser" value="chrome" />
    <parameter name="sUsername" value="ldpb6@gmail.com" />
    <parameter name="sPassword" value="testpass" />
    <classes>
        <class name="automationFramework.TestngParameters" />
    </classes>
</test>

</suite>

When should a defect be logged in agile? When found, or at the end of the sprint?

Could someone give some feedback on the right time to log a defect in a defect tracking system in agile.

If we log a defect right when we find the issue, it might flood the issue tracking system with a lot of issues. On the other hand, if it is not logged, the issues are very hard to track.

Please give me some context on how you handle this in your team and what the best practices are.

Thanks

Write a test to ensure Perl script loads a particular module

I'm writing a test script using CPAN's Test module. I'd like the script to test whether my program loads the URI::URL package. Is this possible?
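
It should be, since Perl records every loaded module in %INC; a sketch with Test::More (module file names in %INC use slashes):

use Test::More tests => 2;

# hypothetical script name; require_ok runs it and fails cleanly if it won't load
require_ok('./my_program.pl');
ok( exists $INC{'URI/URL.pm'}, 'URI::URL was loaded' );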

Meteor test only runs one test suite

This is part of my project directory.

[screenshot: project directory tree]

Unfortunately, for some strange reason, all tests but one are executed; the exception is the one contained in the file methods.transformations.test.js.

Everything seemed fine this morning; then I ran into some issues and fixed them, and when I looked back at the test results, all my tests were no longer executed. All the files are loaded, and if I put a console.log inside every describe I see things being echoed, but no it(...) is executed.

I'm not sure what kind of information I can provide to help resolve this issue. Can someone help?

I'm using

practicalmeteor:chai@2.1.0_1
practicalmeteor:mocha@2.4.5_6
practicalmeteor:mocha-core@1.0.1

TDD: why might it be wrong to let app code know it is being tested, not run?

In this thread, Brian (the only answerer) says "Your code should be written in such a fashion that it is testing-agnostic"

The single comment says "Your code should definitely not branch on a global "am I being tested flag".".

But neither gives reasons, and I would really like to hear some rational thoughts on the matter. It would be immensely easy (particularly given the fact that a lot of tests have package-private access to the app classes) to reach into a given app class and set a boolean to say "this is a test, not a run".

All sorts of things which I find myself jumping through hoops (injected mocked private fields, etc.) to achieve could become easier to accomplish.

It's also obvious that if you took this too far it could be disastrous... but as one tool among many in the software testing armoury why does the concept meet with such opprobrium?

Android Espresso - Test flow with fragments

I'm developing an application with the Fragment Navigation Pattern and I want to write some tests with Espresso, but it's difficult with this pattern (at least in my case), because most tutorials and articles handle the flow with activities, which is what Espresso recommends. Do you have any suggestions or advice?

Proguard not shrinking test APK

I'm using Proguard to shrink my debug apk and test apk

buildTypes {
    debug {
        applicationIdSuffix ".debug"
        debuggable true
        signingConfig signingConfigs.debug
        minifyEnabled true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        testProguardFile 'proguard-test-rules.pro'
    }
}

When I enable minify and run integration tests, the debug apk method count reduces, but not the test apk.

I know that Proguard is doing something, because if I don't have the right rules in proguard-test-rules.pro I'll see warnings and the test apk won't compile.

So what's happening? Why isn't my test apk shrinking? Just for reference, here are my .pro files:

proguard-rules.pro:

# general
-dontobfuscate

# for Retrofit2
-dontwarn retrofit2.**
-keep class retrofit2.** { *; }
-keepattributes Signature
-keepattributes Exceptions

# for RetroLambda
-dontwarn java.lang.invoke.*

# for Saripaar
-keep class com.mobsandgeeks.saripaar.** {*;}
-keep @com.mobsandgeeks.saripaar.annotation.ValidateUsing class * {*;}

# for OKIO
-dontwarn okio.**

# for RxJava
-dontwarn sun.misc.Unsafe

# for android.content.res classes
-dontwarn org.xmlpull.v1.**

# for Butterknife
-dontwarn rx.functions.Func1

proguard-test-rules.pro

-include proguard-rules.pro

-dontobfuscate
-dontwarn

-dontwarn org.hamcrest.**
-dontwarn android.test.**

-dontwarn android.support.test.**
-keep class android.support.test.** { *; }

-keep class junit.runner.** { *; }
-keep class junit.framework.** { *; }
-keep class org.jmock.core.** { *; }
-keep class org.easymock.** { *; }


-dontwarn com.fasterxml.jackson.databind.**
-dontwarn com.fasterxml.jackson.core.**
-dontwarn com.fasterxml.jackson.annotation.**
-dontwarn org.ietf.jgss.**
-dontwarn javax.xml.**
-dontwarn javax.swing.**
-dontwarn javax.lang.**
-dontwarn java.nio.**
-dontwarn java.lang.**
-dontwarn org.w3c.dom.traversal.**
-dontwarn org.eclipse.jetty.**
-dontwarn java.beans.**
-dontwarn org.slf4j.**
-dontwarn org.apache.http.**

CompletableFuture usability and unit test

I'm learning about Java 8's CompletableFuture and ended up with this.

First of all, what do you think about these lines of code? I need to send requests to different services in parallel and then wait for all of them to respond before continuing.

//service A
CompletableFuture<ServiceAResponse> serviceAFuture = CompletableFuture.supplyAsync(() -> this.ServiceA.retrieve(serviceARequest), serviceAExecutorService);

//service B
CompletableFuture<ServiceBResponse> serviceBFuture = CompletableFuture.supplyAsync(() -> this.ServiceB.retrieve(serviceBRequest), serviceBExecutorService);

CompletableFuture.allOf(serviceAFuture, serviceBFuture).join();
ServiceAResponse responseA = serviceAFuture.join();
ServiceBResponse responseB = serviceBFuture.join();

And even though the code is doing what I want, I'm having problems testing the class that contains it. I tried using Mockito and did something like:

doAnswer(invocation -> CompletableFuture.completedFuture(this.serviceAResponse)).when(this.serviceAExecutorService).execute(any());

where the executor services and the service responses are mocks, but the test never ends; the thread keeps waiting for something at this line

CompletableFuture.allOf(serviceAFuture, serviceBFuture).join();

Any hint on what I'm missing here? Thank you!
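
One likely culprit: stubbing executorService.execute() means the supplied task never actually runs, so the futures never complete and join() blocks forever. If the executors are constructor-injected, the test can pass a same-thread executor and stub only the services; a sketch:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// a sketch: a same-thread executor completes the futures synchronously
Executor sameThread = Runnable::run;

CompletableFuture<String> serviceAFuture =
        CompletableFuture.supplyAsync(() -> "stubbed A response", sameThread);
CompletableFuture<String> serviceBFuture =
        CompletableFuture.supplyAsync(() -> "stubbed B response", sameThread);

CompletableFuture.allOf(serviceAFuture, serviceBFuture).join(); // returns immediately

// with Mockito, stub the service call rather than the executor:
// when(serviceA.retrieve(any())).thenReturn(stubResponse);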

How to test computed properties in Vue.js? Can't mock "data"

I wonder how to test computed properties in Vue.js unit tests.

I have created a new project via vue-cli (webpack based).

For example, here is my component:

<script>
  export default {
    data () {
      return {
        source: []
      }
    },
    methods: {
      removeDuplicates (arr) {
        return [...new Set(arr)]
      }
    },
    computed: {
      types () {
        return this.removeDuplicates(this.source)
      }
    }
  }
</script>

I've tried to test it like this:

it('should remove duplicates from array', () => {
  const arr = [1, 2, 1, 2, 3]
  const result = FiltersList.computed.removeDuplicates(arr)
  const expectedLength = 3

  expect(result).to.have.length(expectedLength)
})


QUESTION (two problems):

  1. this.source is undefined. How do I mock it or set a value on it? (FiltersList.data is a function; see the sketch below.)
  2. Perhaps I don't want to call the removeDuplicates method at all, but how do I mock (stub) that call?
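
A sketch for both problems: a computed property is a plain function on the options object, so it can be invoked with an explicit this that supplies source and a stubbed removeDuplicates:

// a sketch, assuming the component is importable as FiltersList
it('computes types without duplicates', () => {
  const fakeThis = {
    source: [1, 2, 1, 2, 3],
    // a stub instead of the real method, if you prefer not to call it
    removeDuplicates: (arr) => [...new Set(arr)]
  }
  const result = FiltersList.computed.types.call(fakeThis)
  expect(result).to.have.length(3)
})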

POSTMAN: Get Generated Request in test to compare to Response

I am using some of the auto-generated parameters in the request body of a Postman request (i.e.: ).

In my test, I would like to retrieve the request that was sent to the server, to compare what this variable's value was with what the response parroted back to me.

for example, my request's body looks like this:

{
 "Description": "testing this "
}

and I would in the tests be able to do:

var request = JSON.parse(requestBody);
var response = JSON.parse(responseBody);
tests["description should match"] = request.Description === response.Description;

Is this doable?

What to do after starting a rails server on localhost:3000?

I am new to Rails. I have followed instructions and set up a Rails server. I have an application on my computer. I have started localhost:3000 and I don't know what the next step is.

I did try a few other solutions, but nothing explains the next step.

Thank you

How to simulate time passing in react in enzyme/mocha component tests

I have a component which changes its state after some time. I'm using

setTimeout(function(){ setState() }, someTime)

to alter the state.

Is it possible to mock the passing of someTime?
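
Yes, assuming Sinon is available alongside enzyme/mocha: its fake timers replace setTimeout and let the test advance the clock synchronously. A sketch:

import sinon from 'sinon';
import { mount } from 'enzyme';

it('changes state after someTime', () => {
  const clock = sinon.useFakeTimers();
  const wrapper = mount(<MyComponent />); // hypothetical component
  clock.tick(someTime);                   // jump the mocked clock past the timeout
  wrapper.update();
  expect(wrapper.state('changed')).to.equal(true); // hypothetical state key
  clock.restore();
});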

Can you have automated regression/integration tests for Azure Logic Apps?

Can you have automated regression/integration tests for Azure Logic Apps?

And if you can, how? ... especially in the context of CI/CD builds and deployments

... and if you can't, why not!!

Is it possible to record macros under IBM Rational DOORS?

Does anyone know whether it is possible to record macros in IBM Rational DOORS? The macro needs to implement an algorithm that systematically adds test case references to test cases based on the linked requirement in the specification.

If so, do I need a special DOORS plug-in?

If so, in which programming/scripting language would the automatically generated macro be? Would I be able to manually change or adapt the source code before calling or executing the macro?

How do I mock a base method in the Controller class using the NSubstitute framework

I need to mock a method defined in a base class when an action method in the Controller class invokes it.

Here is my Controller class below; the action method Index() calls the base method GetNameNodeStatus(). Now, how can I mock the GetNameNodeStatus() defined in the base class when the action method Index calls it, using the NSubstitute mocking framework?

using Cluster.Manager.Helper;
using Cluster.Manager.Messages;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net;
using System.Web;
using System.Web.Mvc;

namespace Cluster.Manager
{
    public class HomeController : Controller
    {
        // GET: Home
        public ActionResult Index()
        {
            ClusterMonitoring monitoring = new ClusterMonitoring();
            string getStatus = monitoring.GetNameNodeStatus("", new Credential());
            return View();
        }
     }
}

Here is my base class ClusterMonitoring

namespace Cluster.Manager.Helper
{
    public class ClusterMonitoring
    {
        public virtual string GetNameNodeStatus(string hostName, Credential credential)
        {
            return "Method Not Implemented";
        }
    }
}

And here is my Test class

namespace NSubstituteControllerSupport
{
    [TestFixture]
    public class UnitTest1
    {

        [Test]
        public void ValidateNameNodeStatus()
        {
            var validation = Substitute.ForPartsOf<ClusterMonitoring>();
            validation.When(actionMethod => actionMethod.GetNameNodeStatus(Arg.Any<string>(), Arg.Any<Credential>())).DoNotCallBase();
            validation.GetNameNodeStatus("ipaddress", new Credential()).Returns("active");
            var controllers = Substitute.For<HomeController>();
            controllers.Index();
        }
    }
}
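
For what it's worth, the substitute configured in this test is never seen by the controller, because Index() news up its own ClusterMonitoring; NSubstitute (like most mocking frameworks) can only intercept a dependency the class receives from outside. A sketch of the controller reworked for constructor injection:

using NSubstitute;

// a sketch: the controller receives ClusterMonitoring instead of creating it
public class HomeController : Controller
{
    private readonly ClusterMonitoring _monitoring;

    public HomeController(ClusterMonitoring monitoring)
    {
        _monitoring = monitoring;
    }

    public ActionResult Index()
    {
        string status = _monitoring.GetNameNodeStatus("", new Credential());
        return View();
    }
}

// the test then passes the substitute in directly
// (GetNameNodeStatus is virtual, so the substitute can intercept it)
var monitoring = Substitute.For<ClusterMonitoring>();
monitoring.GetNameNodeStatus(Arg.Any<string>(), Arg.Any<Credential>()).Returns("active");
var controller = new HomeController(monitoring);
controller.Index();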

How to test a module with two dependencies and a factory method in Jasmine

Maybe this question is answered in other questions, but I don't understand how it works with dependencies :/

I have watched several tutorials and can't find the solution :/

I have a module which has two dependencies and a factory method.

How can I test the factory method?

This is what I've got so far:

test :

describe("Test all services from the home module", function() {

    describe("when I call factory method: GetResultFile", function() {
        //declare variable for dependencies on module/factory method
        var GetResultFile;
        var mockNgNaifBase64;
        var mockNgFileSaver;

        beforeEach(module('Home'));

        beforeEach(module('naif.base64'));

        beforeEach(inject(function(_naif.base64_) {
            mockNgNaifBase64 = _naif.base64_;
        }));

        beforeEach(module('ngFileSaver'));

        beforeEach(inject(function(_ngFileSaver_) {
            mockNgFileSaver = _ngFileSaver_;
        }));

        beforeEach(inject(function() {
            var $injector = angular.injector(['Home']);
            GetResultFile = $injector.get('GetResultFile');
        }));

        it("when I call factory method: GetResultFile.arrayCheck", function() {
            var statusInArray = 'SETUP';
            var statusNotInArray = 'DONE';
            var statusList = [{'id': 1, 'status': 'SETUP'}, {'id': 2, 'status': 'TEST'}];

            expect(GetResultFile.arrayCheck(statusInArray, statusList)).not.toEqual(-1);
            expect(GetResultFile.arrayCheck(statusNotInArray, statusList)).toEqual(-1);
        })  
    })

});

My module is this: angular.module('Home', ['naif.base64', 'ngFileSaver']);

and my factory :

.factory('GetResultFile',
    ['$http', '$q',
    function ($http, $q) {

        var service = {};

        //service function for checking, if item is already in the array
        service.arrayCheck = function(status, items) {

            var i = 0;
            var len = items.length;

            for (i = 0; i < len; i++) {
                //if item is already in array: delete item
                if(status === items[i].status) {
                    return i;
                }
            }
            return -1;
        }
        return service;
}])

mardi 24 janvier 2017

Protractor testing for a selected option

I am writing tests with Protractor and Jasmine for an Angular v1 project.

I have to find a way to test whether an option for a certain user role is selected.

HTML:

<select ng-model="user.groups" class="form-control" multiple ng-options="role for role in groups">
</select>

For my test, I know that the user needs to have a certain role, and I now need to confirm that this one is selected.

What I have so far:

test-spec.js

it('should have the "admin" role checked', function () {
   teamSettingsModul.checkUserRole(userData.adminUser, 'admin');
});

team-settings-modul.js

exports.checkUserRole = function(user, role) {
     teamSettingsPage.openEditTeamMember(user);
     teamSettingsPage.roleSelected(role);
};

team-settings-page.js

this.roleSelected = function(roleName) {
    let select = element(by.model('user.groups')),
        option = select.$('[value=string:"'+roleName+'"]');
    expect(option.getText()).toBe(roleName);
    expect(option.getAttribute('selected')).toBe('selected');
}

Of course the last expect does not work, and I get this back when running the test:

Source - Team Settings Page Edit Form should have the "admin" role checked
Message:
Failed: invalid selector: An invalid or illegal selector was specified
  (Session info: chrome=55.0.2883.95)
  (Driver info: chromedriver=2.26.436421 (6c1a3ab469ad86fd49c8d97ede4a6b96a49ca5f6),platform=Mac OS X 10.11.6 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 16 milliseconds
For documentation on this error, please visit: http://ift.tt/1F7UoFL
Build info: version: '2.53.1', revision: 'a36b8b1', time: '2016-06-30 17:37:03'
System info: host: 'Danielas-MacBook-Pro.local', ip: 'myIP', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.11.6', java.version: '1.8.0_112'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.26.436421 (6c1a3ab469ad86fd49c8d97ede4a6b96a49ca5f6), userDataDir=/var/folders/8r/pllnd7_n0wd4nqptkmn3z3rm0000gn/T/.org.chromium.Chromium.G8gHx8}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=55.0.2883.95, platform=MAC, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, unexpectedAlertBehaviour=}]
Session ID: b03dc6b33eb855957c674a6dc3024033
*** Element info: {Using=css selector, value=[value=string:"admin"]}

Is there a way to test it? I have the feeling I am missing something here.
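
One possibility, sketched with Protractor's cssContainingText locator and WebDriver's isSelected(), which may be more reliable than reading a 'selected' attribute:

// a sketch: locate the option by its text, then ask WebDriver if it is selected
this.roleSelected = function(roleName) {
    let select = element(by.model('user.groups')),
        option = select.element(by.cssContainingText('option', roleName));
    expect(option.getText()).toBe(roleName);
    expect(option.isSelected()).toBe(true);
};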

Jasmine has no access to file but Angular2 has?

Maybe my question is stupid, but "my" Jasmine can't get access to:

project_dir/src/data.json

But Angular does (I have direct access via http://localhost:4200/tabs.json and in the program). In the test console (run with 'ng test' with the default configuration) I got this error:

Chrome 55.0.2883 (Windows 10 0.0.0) ERROR Uncaught Response with status: 404 Not Found for URL: http://localhost:9876/tabs.json at webpack:///~/zone.js/dist/zone.js:155:0 <- src/test.ts:91060 Chrome 55.0.2883 (Windows 10 0.0.0): Executed 0 of 1 ERROR (0.445 secs / 0 secs)

Maybe there is something wrong with my specs? I believe this is only a config matter. I tried many directories and "dots" in the path to this file (./.., etc.) but nothing works.

Selenium WebDriver and Session0 isolation mode restriction on Windows Service

I have my Selenium tests running fine locally against the IE WebDriver, but when trying to run these tests on our Jenkins server (running as a Windows service), they fail with errors around elements not being found. Could it be that the Session 0 isolation restriction on Windows services is causing this? How do I get around this issue to allow my e2e tests to pass on Jenkins?

I look forward to and appreciate any help concerning this matter.

Thank you.

How to control time for capybara / phantomjs tests

I want to test that some deadlines are getting displayed to users correctly in different timezones and at different times of day. My tests are using capybara+rspec+phantomjs.

I am passing a block to Timecop.travel(datetime) and the code in the test within that block is getting the mocked datetime correctly, but it looks like PhantomJS / the mocked browser is not getting the mocked time.

Is there any known way to get PhantomJS to work with Timecop? Or other ways to mock out or manipulate time for testing purposes?

Here's a simple example to illustrate what I mean.

time_spec.rb:

it "should show the Time travel date" do
  # current date is 2017-01-24
  Date.today.should == Date.parse("2017-01-24")
  Timecop.travel( Time.parse("2001-01-01 01:01") ) {
    sign_in(user)
    visit "/#{user.username}"

    Date.today.should == Date.parse("2001-01-01")
    page.should have_text("Today is 2001-01-01")
    page.should have_text("Javascript says 2001-01-01")
  }
end

user.html.erb:

<p>Today is <%= Time.now.iso8601 %></p>
<script>
  var now = moment().format()
  $('p').append("<p>Javascript says "+now+"</p>")
</script>

output of running the test:

Failures:

  1) Dashboard should show the time travel date
     Failure/Error: page.should have_text("Javascript says 2001-01-01")
       expected to find text "Javascript says 2001-01-01" in "Today is 2001-01-01T01:01:00-08:00 Javascript says 2017-01-24T12:36:02-08:00"
     # ./spec/features/time_spec.rb:67:in `block (3 levels) in <top (required)>'
     # /gems/ruby-2.2.0/gems/timecop-0.8.0/lib/timecop/timecop.rb:147:in `travel'
     # /gems/ruby-2.2.0/gems/timecop-0.8.0/lib/timecop/timecop.rb:121:in `send_travel'
     # /gems/ruby-2.2.0/gems/timecop-0.8.0/lib/timecop/timecop.rb:62:in `travel'
     # ./spec/features/time_spec.rb:59:in `block (2 levels) in <top (required)>'
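
One known limitation behind this: Timecop only patches the clock inside the Ruby process, while PhantomJS is a separate process with its own Date. A workaround sketch is to stub the page's JS clock after visiting:

# a sketch: freeze Date.now in the browser itself (moment() reads Date.now);
# plain `new Date()` calls would need a fuller shim such as Sinon's fake timers
page.execute_script(
  "var frozen = new Date('2001-01-01T01:01:00').getTime();" +
  "Date.now = function () { return frozen; };"
)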

How can I select and prioritize my test cases with open source tools?

Hi, I need an open source tool for regression test case selection and prioritization. Please suggest some tools. Thank you.

I used this just for some tests

This question is just for tests. I want to test something with 2 URLs from my localhost. I want to see what the HTTP referer from Stack Overflow is.

  1. http://localhost/shareby/webserver/?1HL2BtujxnB6

  2. same url but using link creator

RSpec-like output in Grails test-app command

Is there an option to Grails 3.2.4's test-app command that gives console output similar to the BDD output of RSpec?

In RSpec, I would use the option --format documentation.

How to begin testing in android

I have some experience with Android but I'm pretty new to testing. I've gone over the docs on the developer site but it feels like I'm missing some fundamentals on how to test code. I'm having trouble even writing simple local unit tests. Is there a staple resource that can get me started on the topic from a high-level methodology perspective to testing examples?

Unsatisfied dependencies with Weld during integration testing

I am able to deploy a RESTEasy application working well with Weld (meaning my CDI works) but I am having some trouble with my integration tests. I get this error:

org.jboss.weld.exceptions.DeploymentException:
WELD-001408: Unsatisfied dependencies for type SomeService with qualifiers @Default

while testing:

@RunWith(WeldJUnit4Runner.class)
public class SomeServiceIT {

    @Inject
    private SomeService service;

    @Test
    public void test() {
        System.out.println(service);
    }
}

The last message in my logs is

DEBUG::WELD-000100: Weld initialized. Validating beans

Content of src/test/resources/META-INF/beans.xml:

<beans xmlns="http://ift.tt/19L2NlC"
    xmlns:xsi="http://ift.tt/ra1lAU"
    xsi:schemaLocation="http://ift.tt/19L2NlC http://ift.tt/18tV3H8"
    version="1.1" bean-discovery-mode="all">
</beans>

By the way I tried the cdi-unit library and it works, but I need to use my own WeldJUnit4Runner which is currently:

public class WeldJUnit4Runner extends BlockJUnit4ClassRunner {

    private final Weld weld;
    private final WeldContainer container;

    public WeldJUnit4Runner(Class<?> klass) throws InitializationError {
        super(klass);
        this.weld = new Weld();
        this.container = weld.initialize();
    }

    @Override
    protected Object createTest() throws Exception {
        return container.instance().select(getTestClass().getJavaClass()).get();
    }
}

I use weld-se 2.4.1.Final for testing.
Thanks.

MVC test with Spring Boot 1.4

This blog describes some of the test improvements in Spring Boot 1.4. Unfortunately it seems that some important information is missing. What static imports are required to use the methods get(), status() and content() from the following example?

@RunWith(SpringRunner.class)
@WebMvcTest(UserVehicleController.class)
public class UserVehicleControllerTests {

    @Autowired
    private MockMvc mvc;

    @MockBean
    private UserVehicleService userVehicleService;

    @Test
    public void getVehicleShouldReturnMakeAndModel() {
        given(this.userVehicleService.getVehicleDetails("sboot"))
            .willReturn(new VehicleDetails("Honda", "Civic"));

        this.mvc.perform(get("/sboot/vehicle")
            .accept(MediaType.TEXT_PLAIN))
            .andExpect(status().isOk())
            .andExpect(content().string("Honda Civic"));
    }
}
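
As far as I can tell, the example resolves with these static imports (MockMvc's request builders and result matchers, plus Mockito's BDD-style given):

import static org.mockito.BDDMockito.given;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;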

Testing methods that depend on each other

Let's say I have a simple data-structure Store with two methods: add and list_all (Example in python):

class Store:
    def __init__(self):
        self.data = []
    def add(self, item):
        self.data.append(item)
    def list_all(self):
        return list(self.data)

Testing its methods would look something like:

def test_add():
    store = Store()
    store.add("item1")
    items = store.list_all() 
    assert len(items) == 1
    assert items[0] == "item1"

def test_list_all():
    store = Store()
    store.add("item1")
    items = store.list_all() 
    assert len(items) == 1
    assert items[0] == "item1"

Well, these tests are awkward: they have literally the same body. To test the list_all method, I have to assume that add already works correctly, and to test add I have to use list_all to check the state of the Store. How do you test these kinds of methods? Do you just write a single test case and say "this proves that both methods work fine"?

PS: It's a theoretical question. I am working on testing a complex system, and couldn't find where to start a bottom-up approach, because of such problems.
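
One pragmatic sketch: accept that add and list_all form one observable behaviour and test the round trip once, then give list_all its own test for the one thing it adds on top (returning a copy rather than the internal list):

def test_add_then_list_all_round_trip():
    store = Store()
    store.add("item1")
    store.add("item2")
    assert store.list_all() == ["item1", "item2"]

def test_list_all_returns_a_copy():
    store = Store()
    store.add("item1")
    store.list_all().append("intruder")   # mutating the returned list...
    assert store.list_all() == ["item1"]  # ...must not touch the store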

Should you do an assert statement inside a loop?

When we have multiple values against which we would like to test a given method, should we loop through the values in a single test?

Or is that incorrect, as in case of failure it might be harder to identify the cause?

Something like this:

void testSomething() {
    List<String> myValues = Arrays.asList("value1", "value2", "value3" /* ... */);

    for (String value : myValues) {
        assertTrue(something(value));
    }
}
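
A middle ground, sketched here with JUnit 5's parameterized tests (assuming JUnit 5 is an option): each value becomes its own reported case, so one failure names its value and does not hide the rest:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class SomethingTest {
    @ParameterizedTest
    @ValueSource(strings = {"value1", "value2", "value3"})
    void testSomething(String value) {
        assertTrue(something(value)); // hypothetical method under test
    }
}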

Android gradlew Project 'C' not found in root project where C is a drive name

I would really appreciate it if anyone here has a solution to this issue.

I am going through Google's Automated Performance Testing codelab. At step 7, the instruction is to run this command:

gradlew :app:assembleDebug :app:assembleDebugAndroidTest :app:installDebug :app:installDebugAndroidTest %ANDROID_HOME%\tools\monkeyrunner run_perf_tests.py .\ "device_id_here"

My %ANDROID_HOME% points to "C:\Users\myname\AppData\Local\Android\sdk1\platform-tools" and echoes just fine on the command line.

After running this command I get this error:

Project 'C' not found in root project 'android-perf-testing'.

What I have tried:

  • moving the monkeyrunner in the project folder
  • moving the monkeyrunner in different location
  • using absolute path
  • using absolute path with forward and backward slash
  • using java double slash in the path
  • creating an environment variable with full path to monkeyrunner and using it instead of %ANDROID_HOME%
  • typing "monkeyrunner.bat" in the path instead of "monkeyrunner"
  • updating buildToolsVersion to "24.0.3"

The stacktrace looks like this:

Exception is:
org.gradle.execution.taskpath.ProjectFinderByTaskPath$ProjectLookupException: Project 'C' not found in root project 'android-perf-testing'.
        at org.gradle.execution.taskpath.ProjectFinderByTaskPath.findProject(ProjectFinderByTaskPath.java:47)
        at org.gradle.execution.taskpath.TaskPathResolver.resolvePath(TaskPathResolver.java:49)
        at org.gradle.execution.TaskSelector.getSelection(TaskSelector.java:79)
        at org.gradle.execution.TaskSelector.getSelection(TaskSelector.java:75)
        at org.gradle.execution.commandline.CommandLineTaskParser.parseTasks(CommandLineTaskParser.java:42)
        at org.gradle.execution.TaskNameResolvingBuildConfigurationAction.configure(TaskNameResolvingBuildConfigurationAction.java:44)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.access$000(DefaultBuildConfigurationActionExecuter.java:25)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter$1.proceed(DefaultBuildConfigurationActionExecuter.java:54)
        at org.gradle.execution.DefaultTasksBuildExecutionAction.configure(DefaultTasksBuildExecutionAction.java:44)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.access$000(DefaultBuildConfigurationActionExecuter.java:25)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter$1.proceed(DefaultBuildConfigurationActionExecuter.java:54)
        at org.gradle.execution.ExcludedTaskFilteringBuildConfigurationAction.configure(ExcludedTaskFilteringBuildConfigurationAction.java:47)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
        at org.gradle.execution.DefaultBuildConfigurationActionExecuter.select(DefaultBuildConfigurationActionExecuter.java:36)
        at org.gradle.initialization.DefaultGradleLauncher$3.run(DefaultGradleLauncher.java:142)
        at org.gradle.internal.Factories$1.create(Factories.java:22)
        at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91)
        at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:53)
        at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:139)
        at org.gradle.initialization.DefaultGradleLauncher.access$200(DefaultGradleLauncher.java:32)
        at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:98)
        at org.gradle.initialization.DefaultGradleLauncher$1.create(DefaultGradleLauncher.java:92)
        at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:91)
        at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:63)
        at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:92)
        at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:83)
        at org.gradle.launcher.exec.InProcessBuildActionExecuter$DefaultBuildController.run(InProcessBuildActionExecuter.java:99)
        at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:28)
        at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
        at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:48)
        at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:30)
        at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:81)
        at org.gradle.launcher.exec.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:46)
        at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:51)
        at org.gradle.launcher.exec.DaemonUsageSuggestingBuildActionExecuter.execute(DaemonUsageSuggestingBuildActionExecuter.java:28)
        at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:43)
        at org.gradle.internal.Actions$RunnableActionAdapter.execute(Actions.java:173)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:239)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:212)
        at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:35)
        at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:205)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:169)
        at org.gradle.launcher.Main.doAction(Main.java:33)
        at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:55)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:36)
        at org.gradle.launcher.GradleMain.main(GradleMain.java:23)
        at org.gradle.wrapper.BootstrapMainStarter.start(BootstrapMainStarter.java:33)
        at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:130)
        at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:48)
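
One reading of the error, offered as a guess: everything on that line is passed to gradlew as arguments, and Gradle parses the expanded C:\... path as a task path, where ':' separates projects, hence "Project 'C' not found". Separately, %ANDROID_HOME% here points at platform-tools, so %ANDROID_HOME%\tools\monkeyrunner would not resolve to monkeyrunner's actual location. If that reading is right, the monkeyrunner invocation would need to be its own command:

gradlew :app:assembleDebug :app:assembleDebugAndroidTest :app:installDebug :app:installDebugAndroidTest
%ANDROID_HOME%\tools\monkeyrunner run_perf_tests.py .\ "device_id_here"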

Thanks, Peter