Saturday, March 31, 2018

Automate C++ program testing with an input file and an output file

Assume that I have a file with inputs for my C++ program. I would like to feed them to the program automatically and then compare the program's output against an expected-output file.

I'm using CLion IDE.

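A minimal way to automate this from a shell (CLion's built-in terminal works too) is redirection plus diff. In this sketch the file names are placeholders, and `cat` stands in for the compiled binary; replace it with the path to your program:

```shell
#!/bin/sh
# Sketch of an input/output check. "cat" stands in for the compiled
# C++ binary -- replace it with e.g. ./cmake-build-debug/myprog.
printf '1 2\n3 4\n' > input.txt
printf '1 2\n3 4\n' > expected.txt

cat < input.txt > actual.txt      # ./myprog < input.txt > actual.txt

if diff -u expected.txt actual.txt >/dev/null; then
    result=PASS
else
    result=FAIL
fi
echo "$result"
```

The script can be run from CLion's terminal or wired up as an External Tool; diff's exit status (0 on match) makes it easy to chain many cases in a loop.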

How to manually test many checkboxes? (Or anything else)

I am new to manual testing. I understand that if I have to test two checkboxes somewhere, I have to test all the possible cases (when both are selected, when none are selected, when one of them is selected), but what if I had a larger number of checkboxes, like 100? How could that be manually tested?
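The reason 100 checkboxes cannot be tested exhaustively is combinatorial: n independent checkboxes give 2^n states. That is why testers fall back on techniques such as pairwise (all-pairs) testing, which only promises that every pair of checkbox states appears together at least once. A quick back-of-the-envelope sketch:

```python
from itertools import combinations

def exhaustive_states(n):
    """All on/off combinations of n independent checkboxes."""
    return 2 ** n

def pairs_to_cover(n):
    """Checkbox pairs that a pairwise (all-pairs) approach must cover."""
    return len(list(combinations(range(n), 2)))

print(exhaustive_states(2))    # 4: feasible by hand
print(exhaustive_states(100))  # over 10^30: impossible to test exhaustively
print(pairs_to_cover(100))     # 4950 pairs, coverable by a far smaller test set
```

Tools that generate all-pairs covering arrays reduce those 4950 pairs to a test set of a few dozen cases, which is what makes manual execution realistic.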

Do testers use Decision Table and State Transition techniques nowadays?

Do testers use the Decision Table and State Transition techniques nowadays? I've asked my friend about them and he said they are a thing of the past. If so, what are the current techniques that manual testers are using?

Friday, March 30, 2018

Rails Testing ArgumentError: When assigning attributes, you must pass a hash as an argument.

There is something wrong with my Rails testing setup. Right now, when I have data in my fixture, Rails does not recognize it as a hash.

require 'test_helper'

class TravelerTest < ActiveSupport::TestCase

  def setup
    @traveler = Traveler.new(:one)
  end

This is where I am getting the following error: "ArgumentError: When assigning attributes, you must pass a hash as an argument. test/models/traveler_test.rb:6:in `setup'"

My traveler.yml looks like this:

one:
  firstname: John
  lastname: Smith
  email: example@email.com
  encrypted_password: '123456'
  confirmed_at: 2016-01-02 08:31:23
  confirmation_sent_at: 2016-01-02 08:30:59

two: {}
# column: value
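For context (a sketch, not part of the original question): the error message is literal. `Traveler.new(:one)` passes the Symbol `:one` where Active Record expects an attribute hash; the fixture name only means something to Rails' fixture accessor, which in a model test would be `travelers(:one)` (assuming the fixture file is named `travelers.yml`). A plain-Ruby illustration of what the fixture file actually contains, with the data inlined from the YAML above:

```ruby
require "yaml"

# A fixture file is plain YAML: each top-level key names a hash of attributes.
fixture = YAML.safe_load(<<~YML)
  one:
    firstname: John
    lastname: Smith
    email: example@email.com
YML

# Traveler.new(:one) hands Active Record the Symbol :one, not a hash,
# which raises the ArgumentError. In the test, either use the fixture
# accessor:
#   @traveler = travelers(:one)
# or pass an attribute hash directly:
#   @traveler = Traveler.new(fixture["one"])
puts fixture["one"]["firstname"]
```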

Test subscribing to Location in angular 2 with karma+jasmine (this.location.subscribe)

I am subscribing to the angular Location service in my component as such:

this.location.subscribe((ev:PopStateEvent) => {
    this.lastPoppedUrl = ev.url;
});

I'd like to be able to test it along with the rest of my component.

Right now I have this stub in my component.spec.ts file

let locationStub: Partial<Location>;
locationStub = {};

and am configuring it into my testbed as a provider:

{provide: Location, useValue: locationStub }

When I run ng test I get this error: this.location.subscribe is not a function.

How can I create a stub or spy that will provide the .subscribe functionality of Location?

Here is a similar question on testing Location, but it is referring to functions within Location, not specifically subscribing to Location.

Any help is much appreciated.
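One way to make the stub work (a hand-rolled sketch in plain JS; in a Jasmine spec the same shape could instead come from `jasmine.createSpyObj('Location', ['subscribe'])`) is to give it a real `subscribe` function that records the callback, plus a test-only helper to fire a fake popstate event:

```javascript
// Hypothetical stub: just enough of Angular's Location for a component
// that only calls subscribe(). emit() is a test-only helper.
function makeLocationStub() {
  const listeners = [];
  return {
    subscribe(onNext) {
      listeners.push(onNext);
      return { unsubscribe() {} }; // mimic the SubscriptionLike shape
    },
    emit(ev) {
      listeners.forEach((fn) => fn(ev));
    },
  };
}

// Usage sketch: what the component does, then what the test can drive.
const locationStub = makeLocationStub();
let lastPoppedUrl;
locationStub.subscribe((ev) => { lastPoppedUrl = ev.url; });
locationStub.emit({ url: '/previous' });
console.log(lastPoppedUrl); // '/previous'
```

In the TestBed it would still be provided the same way, `{ provide: Location, useValue: locationStub }`, and the test can call `locationStub.emit(...)` to simulate back-button navigation.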

Patterns for saving data between tests in a sequence of tests

Which patterns can I use for API testing when I need to get some data and use it in other requests?

Next case for example:

def test_item_get():
    r = get_json('/item')
    assert r.status_code == 200


def test_item_update():
    r = get_json('/item')
    assert r.status_code == 200

    item_uuid = r.json[0]['uuid']
    assert is_uuid(item_uuid)

    r = put_json('/item/{}'.format(item_uuid), {'description': 'New desc'})
    assert r.status_code == 200


def test_item_manager():
    r = get_json('/item')
    assert r.status_code == 200

    item_uuid = r.json[0]['uuid']
    assert is_uuid(item_uuid)

    r = put_json('/item/{}'.format(item_uuid), {'description': 'New desc'})
    assert r.status_code == 200

    r = get_json('/item/{}'.format(item_uuid))
    assert r.status_code == 200
    assert r.json['description'] == 'New desc'

    r = delete_json('/item/{}'.format(item_uuid))
    assert r.status_code == 200
    assert r.json['result'] == True

    r = delete_json('/item/{}'.format(item_uuid))
    assert r.status_code == 404

It looks like I should divide test_item_manager into smaller parts, but I'm not sure which way to choose.

Ideally there would be a way to do this with pytest or unittest, but other testing modules, or even a link to source code solving similar tasks, would be welcome.
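A common pytest answer (a sketch under assumed names, not the only pattern) is to keep tests independent and push the shared "get an item uuid" step into a helper or fixture, so each test arranges its own item instead of repeating or depending on another test. Here an in-memory dict stands in for the API; in a real suite create_item would call post_json/get_json:

```python
import uuid

# In-memory stand-in for the API so the sketch is runnable.
ITEMS = {}

def create_item(description="Initial desc"):
    """Arrange step: each test gets its own fresh item uuid."""
    item_uuid = str(uuid.uuid4())
    ITEMS[item_uuid] = {"description": description}
    return item_uuid

def test_item_update():
    item_uuid = create_item()          # no dependence on another test
    ITEMS[item_uuid]["description"] = "New desc"
    assert ITEMS[item_uuid]["description"] == "New desc"

def test_item_delete():
    item_uuid = create_item()
    del ITEMS[item_uuid]
    assert item_uuid not in ITEMS
```

With pytest proper, `create_item` becomes a `@pytest.fixture` requested as a test argument; in unittest it would live in `setUp`. Either way, each test stays runnable in isolation, which the chained test_item_manager is not.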

Qt: get non-writable location

I'm writing a test for my application in which the user supplies a path to a folder and the application creates a file there and writes some data to it. I want to check that the application behaves correctly when the user supplies a non-writable folder as input.

So I need a cross-platform way to get an existing non-writable folder using Qt. Is that possible?
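Qt has no API that returns a guaranteed non-writable directory, and there is no truly cross-platform one. The usual trick is for the test itself to create a temporary directory and strip its write permission (in Qt terms, QTemporaryDir plus QFile::setPermissions). The idea, sketched in Python for illustration since it is permission logic rather than Qt-specific code (POSIX-only; Windows directory ACLs don't map onto these mode bits):

```python
import os
import stat
import tempfile

# Create a temp directory, then drop the owner's write bit so the code
# under test sees an existing but non-writable folder.
d = tempfile.mkdtemp()
os.chmod(d, stat.S_IRUSR | stat.S_IXUSR)  # r-x------

mode = stat.S_IMODE(os.stat(d).st_mode)
print(oct(mode))  # 0o500

# Restore permissions before cleanup so the directory can be removed.
os.chmod(d, 0o700)
os.rmdir(d)
```

The test then passes that path to the application and asserts on the error handling; remember to restore permissions in teardown or the temp directory cannot be deleted.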

Testing methods that use the Android API

First post, so sorry if I mess anything up. I am trying to write a test for a method that uses android.graphics.Path, using only JUnit (I think). Every time I try to execute the tests I get an error: java.lang.RuntimeException: Method moveTo in android.graphics.Path not mocked. I get a very similar error when testing methods that use android.graphics.Point. I don't know if I have missed something or if graphics objects can't be used in tests (though the latter seems unlikely). Any help on how I should be writing this test would be really appreciated!

@Test
public void createHorizontalLinesFromVerticesReturnsAnArrayListOfTheRightSize() {
    Grid grid = new Grid();
    ArrayList a = new ArrayList();
    a.add(new Coordinate(0 ,0));
    a.add(new Coordinate(333, 0));
    a.add(new Coordinate(666, 0));
    a.add(new Coordinate(999, 0));
    assertThat(grid.horizontalPaths.size(), is(GlobalParameters.getHorizontalLines()));
}

public ArrayList<Path> createHorizontalLinesFromVerticies(ArrayList<Coordinate> list) {

    ArrayList<Path> p = new ArrayList<>();
    Path path;
    for(Coordinate c : list){
        path = new Path();
        path.moveTo(c.getX(), c.getY());
        path.lineTo(c.getX(), Constants.getScreenHeight());
        p.add(path);
    }
    return p;
}
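For context: the "not mocked" error is by design. Local JVM unit tests run against a stubbed android.jar in which every method, including android.graphics.Path.moveTo, throws. The options are an instrumented test on a device/emulator, Robolectric, or, if the test only needs the Android calls to be ignored, the documented build.gradle switch below. Note that with it, Path methods silently do nothing, so assertions about actual path contents still won't see real behavior:

```groovy
android {
    testOptions {
        // Stubbed android.jar methods return defaults (0/null/false)
        // instead of throwing "... not mocked".
        unitTests.returnDefaultValues = true
    }
}
```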

Statistical test to show one group falls between the other two

I have three ordered groups (say one, two, and three). Each group has 10 values. I want to test whether the values of group two fall between those of groups one and three. Is there a test capable of doing this? I looked at the Cuzick, Jonckheere, and Page trend tests, but their alternative hypothesis is not what I am looking for.

Testing tool to enforce consumers of my API to comply to my requirements

Currently I'm working on a POC where I try to decouple my frontend from my backend development, making the frontend development platform-independent.

What I'd like to achieve is the following:

A service with a REST API is created that has endpoints to render out HTML. Each endpoint has its own criteria as to which properties need to be on the HTTP request. I'd like to ensure that the backend developers have to comply with the needs of the service, so they can be assured that whatever HTML they retrieve from it will be fine.

I've looked into pact, but it's not exactly what I need, but it sort of sums up what I'd like to achieve: a kind of contract that can be defined at the level of the service and gets shared with the backend developers, so that they know what parameters to fill in when requesting HTML.

It seems like a bit of a combination between Swagger and Pact.

If any of you have an idea to point me to some kind of testing tool that could provide me with this, I will be more than glad to look into it!

Kind regards

Can't run TestNG or JUnit test

I am working on a project in which I implemented a simple feature file, a step definition class, and the runner, and I want to test it with JUnit or by running TestNG.xml.

But I get this problem:

You can implement missing steps with the snippets below:

@Given("^sample feature file is ready$")
public void sample_feature_file_is_ready() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@When("^I run the feature file$")
public void i_run_the_feature_file() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

@Then("^run should be successful$")
public void run_should_be_successful() throws Throwable {
    // Write code here that turns the phrase above into concrete actions
    throw new PendingException();
}

my feature is :

@smokeTest
Feature: To test my cucumber test is running
I want to run a sample feature file.

Scenario: cucumber setup

Given sample feature file is ready
When I run the feature file
Then run should be successful

My stepdefinition is :

public class Test_Steps {

      @Given("^sample feature file is ready$")
      public void givenStatment(){
            System.out.println("Given statement executed successfully");
      }

      @When("^I run the feature file$")
      public void whenStatement(){
         System.out.println("When statement execueted successfully");
      }

     @Then("^run should be successful$")
      public void thenStatment(){
         System.out.println("Then statement executed successfully");
      }

My runner is :

@RunWith(Cucumber.class)
@CucumberOptions(
     features = {"D:\\feat\\user"},
     glue = {"stepsdef"},
     tags = "@smokeTest")
public class RunnerTestInChrome extends AbstractTestNGCucumberTests {

}

How can I tell whether the step definition class is linked to my features? I don't understand the problem; I did implement the step methods.

Who should do Xcode UI testing?

I am not sure this question belongs here; it is a software project management question.

Please don't downvote; let's sort out in peace whether I need to post it somewhere else.

I was recently trying to include Xcode UI tests in our iOS project. We have a testing team that writes and runs automated UI tests every night; they use Appium.

I feel Xcode UI Tests are better than Appium, so:

Should we developers do all the UI tests in Xcode, or is that the job of the QA team?

Are Xcode UI Tests better than Appium tests?

Should both Xcode UI Tests and Appium tests be done?

How to specify source jar for Cucumber stepdef

I have some scenarios which call a stepdef from a jar included as dependency.

Now, I want to maintain 2 separate versions for same stepdef into 2 different jars.

So, I need few scenarios to use version 1 and others to use version 2 of the stepdef.

How do I do this with CucumberOptions, specifically by mentioning the jar source in the glue?

Embedded Software Release Life Cycle

I want to ask a rather general question related to the firmware/software release life cycle to see what others' views are on this. The context is this: you have a client for whom you are developing software for a device that is under development as well (hardware-wise). Because the client also needs to develop their part (e.g. a Windows application, mobile phone app, etc.), you often release new versions and send them to the client. My questions are:

  • How much testing should be done prior to the release to the client? Since this is not a "public" release but a version still under development, I would say make sure you don't have obvious bugs (the ones that pop up within the first two minutes of testing the device), but don't involve a separate test team just for that.

  • You finish development and release your first "public" version, with all features implemented. Knowing that there is no bug-free software, how do you decide when software development has ended (in relation to the client)? I.e., you deliver software with all the requested functions implemented, and the client comes back with found bugs every now and then. How do you treat these bugs? Do you charge for them, or do you consider them part of the initial development?

Thank you!

Thursday, March 29, 2018

Any reasons to avoid using random IDs in a FactoryBot factory definition?

I've often heard not to use random data when writing tests, which seems reasonable for most data.

However, I can't think of any reason why a random ID for records would be bad?

For example:

FactoryGirl.define do
  factory :dog do
    id { rand(100_000) }
    name "Sparky"
    legs 4
  end
end

As long as I make sure any associated records use the correct, randomly-assigned ID, I don't see how this could be an issue.
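One concrete reason to be wary (an aside, not from the original question): random IDs can collide, and with `id { rand(100_000) }` the birthday problem makes collisions likely far sooner than intuition suggests; FactoryBot's `sequence` exists precisely to hand out unique values instead. A quick sketch of the collision probability:

```ruby
# Probability that at least two of n records drawn uniformly from
# `space` possible ids collide (the birthday problem).
def collision_probability(n, space = 100_000)
  p_unique = 1.0
  n.times { |k| p_unique *= (space - k).to_f / space }
  1.0 - p_unique
end

puts collision_probability(10)    # tiny: well under 0.1%
puts collision_probability(1000)  # over 99%: collisions near-certain
```

So for a handful of records per test the risk is negligible, but any suite that builds records in bulk will eventually hit a duplicate-key failure that is painful to reproduce.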

PuLP not working with Travis CI?

I use PuLP in my project and all tests pass on my local machine. But when Travis CI runs the test suite all the tests involving PuLP fail. Here's the summary (all the tests containing exact use PuLP):

============================= test session starts ==============================
platform linux -- Python 3.4.6, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/travis/build/yogabonito/region, inifile: pytest.ini
collected 134 items 
region/csgraph_utils.py ...
region/objective_function.py ..
region/util.py .............
region/max_p_regions/tests/test_exact.py FFFFFFFFFFFF
region/max_p_regions/tests/test_heu.py ............
region/p_regions/tests/test_azp.py ............
region/p_regions/tests/test_azp_basic_tabu.py ............
region/p_regions/tests/test_azp_reactive_tabu.py ............
region/p_regions/tests/test_azp_simulated_annealing.py ............
region/p_regions/tests/test_exact.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
region/tests/test_util.py .......
region/tests/util.py .

Here's an example failure:

=================================== FAILURES ===================================
___________________________ test_scipy_sparse_matrix ___________________________
    def test_scipy_sparse_matrix():
        cluster_object = MaxPRegionsExact()
        cluster_object.fit_from_scipy_sparse_matrix(adj, attr,
                                                    spatially_extensive_attr,
>                                                   threshold=threshold)
region/max_p_regions/tests/test_exact.py:23: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
region/max_p_regions/exact.py:153: in fit_from_scipy_sparse_matrix
    prob.solve(solver)
../../../virtualenv/python3.4.6/lib/python3.4/site-packages/pulp/pulp.py:1664: in solve
    status = solver.actualSolve(self, **kwargs)
../../../virtualenv/python3.4.6/lib/python3.4/site-packages/pulp/solvers.py:1362: in actualSolve
    return self.solve_CBC(lp, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <pulp.solvers.COIN_CMD object at 0x7ff3e76ba9b0>
lp = Max-p-Regions:
MINIMIZE
50.29999999999968*t_(0,_1) + 80.59999999999991*t_(0,_2) + 140.19999999999993*t_(0,_3) + 60.699... <= x_(8,_8,_5) <= 1 Integer
0 <= x_(8,_8,_6) <= 1 Integer
0 <= x_(8,_8,_7) <= 1 Integer
0 <= x_(8,_8,_8) <= 1 Integer
use_mps = True
    def solve_CBC(self, lp, use_mps=True):
        """Solve a MIP problem using CBC"""
        if not self.executable(self.path):
            raise PulpSolverError("Pulp: cannot execute %s cwd: %s"%(self.path,
>                                  os.getcwd()))
E           pulp.solvers.PulpSolverError: Pulp: cannot execute cbc cwd: /home/travis/build/yogabonito/region
../../../virtualenv/python3.4.6/lib/python3.4/site-packages/pulp/solvers.py:1372: PulpSolverError
----------------------------- Captured stdout call -----------------------------
start solving with <pulp.solvers.COIN_CMD object at 0x7ff3e76ba9b0>

It looks like Travis cannot find the CBC CMD solver although according to the PuLP-docs it is "included" / "bundled with pulp". (On my local machine I did not have to install the CBC CMD solver. It was installed automatically with PuLP.)

My question is: how can I make Travis CI find the solver? Also interesting: why did these problems only occur on Travis CI?
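One likely cause (an assumption — PuLP does ship CBC binaries, but the bundled one evidently isn't executable in the Travis environment) is simply that no working `cbc` binary is on the PATH of the build VM. A hedged fix is to install the distribution's CBC package via the apt addon:

```yaml
# Hypothetical .travis.yml addition: provide a system-wide `cbc` binary
# so PuLP's COIN_CMD solver can execute it.
addons:
  apt:
    packages:
      - coinor-cbc
```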

Not able to run a Taurus performance test on an existing API

After installing Taurus on a Windows 10 machine and creating a new file for performance-testing my API, I get the following error.

perf-test-config.yml

execution:
- concurrency: 100
  ramp-up: 1m
  hold-for: 2m
  scenario: helloworld-api-perf-test

scenarios:
  quick-test:
    requests:
    - https://helloworld-api.cfapps.io

Error Log:

> bzt perf-test-config.yml
15:32:18 INFO: Taurus CLI Tool v1.11.0
15:32:18 INFO: Starting with configs: ['perf-test-config.yml']
15:32:18 INFO: Configuring...
15:32:18 INFO: Artifacts dir: C:\Users\chandeln\MY-WORK\helloworld-api\2018-03-29_15-32-18.609453
15:32:18 WARNING: at path 'execution.0.scenario': scenario 'helloworld-api-perf-test' is used but isn't defined
15:32:18 INFO: Preparing...
15:32:19 WARNING: Could not find location at path: helloworld-api-perf-test
15:32:19 ERROR: Config Error: Scenario 'helloworld-api-perf-test' not found in scenarios: dict_keys(['quick-test'])
15:32:19 INFO: Post-processing...
15:32:19 INFO: Artifacts dir: C:\Users\chandeln\MY-WORK\helloworld-api\2018-03-29_15-32-18.609453
15:32:19 WARNING: Done performing with code: 1
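The log already names the problem: the execution block asks for a scenario called `helloworld-api-perf-test`, but the scenarios section only defines `quick-test`. Making the two names agree should fix it, e.g.:

```yaml
execution:
- concurrency: 100
  ramp-up: 1m
  hold-for: 2m
  scenario: helloworld-api-perf-test

scenarios:
  helloworld-api-perf-test:   # renamed from quick-test to match
    requests:
    - https://helloworld-api.cfapps.io
```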

Visual Studio skipping my tests

I've been debugging since last night, but out of the blue this morning Visual Studio started skipping the test I've been working on. I did update Visual Studio at some point. I also reverted to old versions where testing was performing correctly. Tried turning on/off Live Unit testing. The test in question only has a few lines, and I can't find any hint of an ignore property or any reason VS should skip the test. Any guesses?

Trying to get Laravel Dusk to behave with sqlite database

I'm trying to get Laravel Dusk to play nicely with an app I'm trying to test.

At the moment I can write to a test sqlite database, but when I try to test a login form following the guidance, it appears the details in the development database are being used instead.

Here's my test:

class LoginTest extends DuskTestCase
{

private $user;

use DatabaseMigrations;

public function setUp()
{
    parent::setUp();

    $this->user = factory(User::class)->create(['password' => bcrypt('secret')]);

}

/**
 * A Dusk test example.
 *
 * @return void
 * @throws \Exception
 * @throws \Throwable
 */
public function test_user_can_log_in()
{

    $this->browse(function (Browser $browser) {

        $browser->visit('/login')
            ->assertSee('Members sign in')
            ->type('email', $this->user->email)
            ->type('password', 'secret')
            ->driver->executeScript('window.scrollTo(0, 500);');

        $browser->press('Sign in')
            ->assertPathIs('/home');
    });
}
}

This test fails authentication, as the user I've just created doesn't exist in the development MySQL database the app is reading from.

I am able to see the user I've just created in the sqlite database and can confirm that the user exists.

What am I doing wrong? Does Laravel Auth do something to override the connections?

Thank you
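A likely explanation (hedged, but it matches Dusk's documented design): Dusk drives a real browser against a separately served copy of the app, and that server process never sees the env overrides in phpunit.xml — it reads the regular .env. The documented mechanism is an `.env.dusk.local` file that Dusk swaps in for the duration of the suite. Note also that an in-memory sqlite database cannot work here, because the test process and the server process would each get their own private `:memory:` database; a file-backed sqlite database is the usual workaround. A sketch (the sqlite path is an assumption):

```
# Hypothetical .env.dusk.local
APP_ENV=testing
DB_CONNECTION=sqlite
DB_DATABASE=/full/path/to/database/dusk.sqlite
```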

How to enable the file API when writing e2e tests with Protractor

I am writing a test case with Protractor. Below is the config I use:

export let config: Config = {
  framework: "jasmine",
  specs: ["e2e-spec.js"],
  seleniumAddress: "http://localhost:4444/wd/hub",
  noGlobals: true,
  capabilities: {
    browserName: "chrome",
  },
  allScriptsTimeout: 200000
};

But when I try to access the file API, e.g. fs.write, it complains that fs is undefined.

How can I enable the fs module within my test case?

How to upload multiple files with Robot Framework

I am currently doing automation testing of a browser app using Robot Framework. Uploading one file at a time is easy using the Choose File keyword, but how do you upload multiple files? In my case, I need to select all the files in a directory and upload them.

Test bench: verilog task control

I have a strange question regarding task control in a Verilog test bench, so please bear with my attempt to keep it in a simple form.

I have a task flow inside a file which I am not permitted to change in any way. For example:

1) File: Faults

task examine;
begin
  a();
  b();
end
endtask

The definitions of tasks a and b are also inside the Faults file, which I am not permitted to change.

However, I am using the above task in my test case. 2) File: Testcase_1

task xyz; task examine; task hml; etc..

However, there is a problem with task a: when it runs, a variable inside that task changes wrongly, and I have to force the variable to follow the right formula somewhere else and correct it without fixing the task in the Faults file.

I have access to another file (File: utility) where I can write the correct formula to force the variable to follow the new formula and get changed, and then continue executing task b and so on in my test case and produce the results.

But I don't know how I can stop task a in the middle, change the variable (force it to follow a new formula and take the right value), and then continue with task b and so on. Can anybody help me with this?

I am happy to provide additional info if required. Thanks in advance

Testing DashJS with Jest & Enzyme

I'm trying to write Jest tests for a React component which contains a DashJS media player. I'm using Enzyme's mount method to try and test the component, but it seems that the DashJS media player fails to mount properly.

In my componentDidMount method, I have the following code:

    this.videoManager = dashjs.MediaPlayer().create();
    this.videoManager.initialize(this.videoPlayer, videoUrl, true);
    // Where this.videoPlayer is a reference to an HTML <video> element
    this.videoManager.preload();

The last line (this.videoManager.preload();) produces the following error:

You must first call attachSource() with a valid source before calling this method thrown

When I run the component it works normally - it's only the testing I'm having issues with. I haven't been able to find any related issues/solutions online.

I'm using the following versions of each relevant package:

  • react: "16.2.0"
  • dashjs: "2.6.7"
  • jest: "22.3.0"
  • enzyme: "3.3.0"
  • enzyme-adapter-react-16: "1.1.1"

Any help will be appreciated!
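A plausible culprit: Jest's default jsdom environment has no real media element, so the player can't actually attach a source before `preload()` runs. A common workaround is to mock the dashjs module so mounting gets an inert player. Below is a hand-rolled stub of the factory chain, with the shape inferred from the calls in the snippet above; with Jest it would be registered via `jest.mock('dashjs', () => dashjsStub)`:

```javascript
// Stub mirroring dashjs.MediaPlayer().create().initialize()/preload().
const calls = [];
const dashjsStub = {
  MediaPlayer() {
    return {
      create() {
        return {
          initialize(videoEl, url, autoPlay) { calls.push(['initialize', url]); },
          preload() { calls.push(['preload']); },
          reset() { calls.push(['reset']); },
        };
      },
    };
  },
};

// What componentDidMount would do against the stub:
const player = dashjsStub.MediaPlayer().create();
player.initialize(null, 'http://example.com/video.mpd', true);
player.preload(); // no "attachSource" error: the stub is inert
console.log(calls.length); // 2
```

The snapshot test then exercises the component's rendering without touching real media APIs, and `calls` lets you assert the component wired the player up as expected.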

Pass strings to a running C program from the terminal

I want to test my program which is structured this way:

// in main
do {
    printf("insert move \n");
    fgets(move, 7, stdin);

     // stuff

} while (/* condition */);

Basically it has a loop where it reads the user input and does stuff...

I want my test to pass a set of n strings to this loop.

How can I do that?
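Since the loop reads stdin with fgets, the terminal-level answer is redirection: put the n strings in a file (or printf them), one per line, and pipe them in. A sketch, with a tiny shell function standing in for the compiled program:

```shell
#!/bin/sh
# Stand-in for the compiled binary; replace `stand_in` with ./yourprog.
stand_in() { while IFS= read -r move; do echo "got: $move"; done; }

# 1) Pipe the moves directly:
printf 'a2a3\nb7b5\nquit\n' | stand_in

# 2) Or keep them in a file and redirect stdin:
printf 'a2a3\nb7b5\nquit\n' > moves.txt
out=$(stand_in < moves.txt)
echo "$out"
```

A here-document (`./yourprog <<EOF ... EOF`) works the same way. Each line arrives at one fgets call, exactly as if typed interactively.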

Skipping tests during Maven compile

I am trying to stop Maven from compiling the tests. I have already tried using -DskipTests and -Dmaven.test.skip=true, but neither stops Maven from compiling the tests.

 mvn clean install -Dmaven.test.skip=true -DskipTests -T1C

I get errors in the tests.

Laravel testing requires multiple databases; trying to use sqlite gives duplicate tables

My testing journey continues...

In an app I have multiple databases: 1. one for the backend content, with users for the back end and other stuff; 2. one for the front end, with users for the front end.

To carry out a test I need to create a user for the front end and test against content in the backend.

I'm trying to configure PHPUnit and Laravel to use two in-memory sqlite databases, and I'm struggling.

I have two migration files for each database and have my normal set up as follows in database.php

'sqlite_testing_memory' => [
        'driver' => 'sqlite',
        'database' => ':memory:',
        'prefix' => '',
    ],

in my phpunit.xml I have created the following:

    <env name="APP_ENV" value="testing"/>
    <env name="CACHE_DRIVER" value="array"/>
    <env name="SESSION_DRIVER" value="array"/>
    <env name="QUEUE_DRIVER" value="sync"/>
    <env name="DB_CONNECTION" value="sqlite_testing_memory"/>
    <env name="DB_DEFAULT" value="sqlite_testing_memory" />
    <env name="DB_DATABASE" value=":memory:"/>
    <env name="DB_DATABASE_2" value=":memory:"/>

Each time I try and migrate the databases I get conflicts due to tables already existing e.g users from the back end conflicting with users from the front end.

I understand I can use Schema::connection('connection_name') to specify the connection in the migration file, but I still get the conflicts.

Is it possible to have multiple in-memory sqlite databases?

How can I get my tests to use multiple databases, or should I consider reverting to a MySQL testing database?

Help appreciated
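For what it's worth, two named `:memory:` connections are genuinely separate databases, so a "table already exists" conflict usually means both migration sets are running on the same (default) connection rather than each on its own. A sketch of the shape this needs (connection names are placeholders); each migration must then pin its connection with `Schema::connection(...)` and each model needs a matching `protected $connection`:

```php
// config/database.php (sketch): two distinct in-memory databases.
'connections' => [
    'sqlite_backend' => [
        'driver'   => 'sqlite',
        'database' => ':memory:',
        'prefix'   => '',
    ],
    'sqlite_frontend' => [
        'driver'   => 'sqlite',
        'database' => ':memory:',
        'prefix'   => '',
    ],
],
```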

Wednesday, March 28, 2018

Travis PR failed, push passed

The branch was previously functional, then merged to master, and the builds on master failed. Master was reverted, then master was merged into this branch and some fixes were made. When attempting to merge back to master, the build failed again with the following error: the push passed, but the PR failed.

* What went wrong:
Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not find com.squareup.leakcanary:leakcanary-android:1.5.4.

The travis.yml file:

sudo: false
language: android
android:
components:
- build-tools-27.0.2
- android-27
- sys-img-armeabi-v7a-android-27

jdk:
- oraclejdk8


before_install:
- yes | sdkmanager "platforms;android-27"
- chmod +x gradlew


#First app is built then unit tests are run
jobs:
include:
- stage: build
  async: true
  script: ./gradlew assemble
- stage: test
  async: true
  script: ./gradlew -w runUnitTests


notifications:
  email:
    recipients:
    - email@me.com
    on_success: always # default: change
    on_failure: always # default: always

Best way to pass data to a Mocha test that's run programmatically?

Trying to solve a problem where I can't seem to pass dynamically gathered data to Mocha tests.

Here is the logic of my application:

  1. Client submits their Github url. Request is made to Express/Node application.

  2. Express/Node application takes repo and username and makes request to Github API for data and adds the content of the files to an object as base64.

  3. The object with the files are passed to the relevant test files and then executed.

  4. The results are processed and preliminary grades are created. These are then sent back to the client.

Here is what a test file can look like:

    const chai = require('chai');
    const chaiSubset = require('chai-subset');
    chai.use(chaiSubset);
    const expect = chai.expect;
    const base64 = require('base-64');
    const HTML_CONTENT = require('../../00-sandbox-files/basic-portfolio-solution.json').html;
    const CSS_CONTENT = require('../../00-sandbox-files/basic-portfolio-solution.json').css;
    const decodedCSS = base64.decode(CSS_CONTENT[1].content);
    const cheerio = require('cheerio');
    const juice = require('juice');

    let decodedHTMLcontact;
    let decodedHTMLindex;
    let decodedHTMLportfolio;

    for (const obj in HTML_CONTENT) {
      if (HTML_CONTENT[obj].path == "contact.html") {
        decodedHTMLcontact = base64.decode(HTML_CONTENT[obj].content);
      } else if (HTML_CONTENT[obj].path == "index.html") {
        decodedHTMLindex = base64.decode(HTML_CONTENT[obj].content);
      } else if (HTML_CONTENT[obj].path == "portfolio.html") {
        decodedHTMLportfolio = base64.decode(HTML_CONTENT[obj].content);
      }
    }

    tests = function (html, css) {
      describe('HTML Elements tests that should pass for contact.html', function () {
        let $ = cheerio.load(decodedHTMLcontact);
        describe('HTML Elements that should exist in contact.html', function () {
          it('should contain a header element', function () {
            expect($('body').find('header').length).to.equal(1);
          });
          it('should contain a section element', function () {
            expect($('body').find('section').length).to.equal(1);
          });
          it('should contain several anchor elements', function () {
            expect($('nav').find('a').length).to.be.at.least(3, 'You need an additional anchor elements for your navigation elements');
          });
          it('should contain an h1 element', function () {
            expect($('body').find('h1').length).to.equal(1);
          });
          it('should contain a form element', function () {
            expect($('body').find('form').length).to.equal(1);
          });
          it('should contain a footer element', function () {
            expect($('body').find('footer').length).to.equal(1);
          });
        });

Here is the execution file for the Mocha tests:

const Mocha = require('mocha');

// Instantiate a Mocha instance.
const mocha = new Mocha();
const HW_PORTFOLIO_PATH = './server/05-run-testing-suite/HW-Week1-portfolio-wireframe/HW-1-portfolio.js';

function homeworkRouter(contentObj, repo) {
  switch (repo) {
    case "Basic-Portfolio":
      mocha.addFile(HW_PORTFOLIO_PATH);
      break;
    case "HW-Wireframe":
      mocha.addFile('./server/05-run-testing-suite/HW-Week1-portfolio-wireframe/HW-1-wireframe.js');
      break;
    default:
      console.log("No homework provided");
      break;
  }
}

module.exports = {

  // Run the tests and have info about what can be returned
  runTests: function(contentObj, repo) {
    homeworkRouter(contentObj, repo);
    console.log("Content Object", contentObj);
    console.log("Repo", repo);
    mocha.run()
    .on('fail', function (test, err) {
      console.log('Test fail');
      console.log(err);
    })
    .on('end', function () {
      console.log('All done');
    });
  }
}
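Since `mocha.addFile()` only takes a path, the test file can't receive arguments directly. One workaround (a sketch, not the only option — Mocha's pre-require/root-hook machinery is another) is to park the dynamic object somewhere the test file can read at the moment Mocha requires it, such as a global set just before `mocha.run()`:

```javascript
// Runner side: stash the dynamic data before the suite files are loaded.
global.__SUBMISSION__ = { html: [{ path: 'index.html', content: '' }], css: [] };
// mocha.addFile(HW_PORTFOLIO_PATH); mocha.run(...);  // as in the runner above

// Test-file side: read the stash instead of require()-ing a static JSON.
const HTML_CONTENT = global.__SUBMISSION__.html;
console.log(HTML_CONTENT[0].path); // index.html
```

One caveat: Node caches required modules, so a second `mocha.run()` in the same process reuses the old data unless the test file's entry in `require.cache` is deleted between runs.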

React Native - Jest testing with firebase

I am a newbie in Firebase. I have read that it is good to unit test as you go. So this is what I have been trying to do lately. I currently have a problem when trying to test render on a class that uses Firebase. This is the following code I have been trying to fix:

import 'react-native';
import React from 'react';
import MainTab from '../../components/MainTab';
import Enzyme from 'enzyme';

import renderer from 'react-test-renderer';

it('renders correctly', () => {
  const tree = renderer.create(
    <MainTab/>
    ).toJSON();
  expect(tree).toMatchSnapshot();
});

However, I am getting the following error on the test when trying to obtain the current user's id from Firebase (error screenshot not reproduced here).

Has anyone stumbled into this error before? P.S. Don't go hard on me if this is something really vacuous, just trying to learn.

Running Cucumber test scripts in parallel in different browsers

I have a project with many features; I want to run tests in different browsers in parallel using the cucumber-jvm plugin.

In my pom.xml I added the two plugins: cucumber-jvm and maven-surefire.

I created the runner class and added:

@RunWith(Cucumber.class)
@CucumberOptions(
features = {....},
glue={...})
public class RunnerTest extends AbstractTestNGCucumberTests{}

Now I am not able to run the test. How can I run the different features in browsers in parallel using cucumber-jvm or Selenium Grid?

How does Android Studio get code coverage?

As stated in the docs and other SO questions, Android Studio provides a way to run your tests and get the code coverage (at class, method, and line level).

What framework or tool does it use internally to get the coverage?

Is it possible to have 100% branch coverage without 100% statement coverage?

I am trying to think of examples where 100% branch coverage is reached but statement coverage is less than 100%.

So far I can't think of any possible way for this to happen. Is there something obvious I am missing, or am I overthinking this simple problem?
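It is possible, and the standard example is dead code: every branch can be exercised while a statement that no branch leads to is never executed. A sketch:

```python
def classify(x):
    if x > 0:
        return "positive"
    else:
        return "non-positive"
    print("unreachable")  # dead statement: no path reaches it

# Two calls take both branches of the only if/else, giving 100% branch
# coverage, yet the final print never runs, so statement coverage < 100%.
print(classify(1))   # positive
print(classify(-1))  # non-positive
```

Whether a given tool even counts unreachable statements varies, which is part of why the two metrics usually agree in practice.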

Get access to Angular service instance from JavaScript code

What I'm trying to do is have some testing assertions based on the data in the Angular service, i.e. we're trying to create E2E tests and the tool we're using allows us to execute arbitrary JavaScript code for assertions, so for that I need to know if it's possible to get access to the Angular service instance.

How can I get access to an Angular service instance from plain JS code?

That is, if my Angular app is deployed and I open the app in the browser, then open Chrome DevTools, can I get access to the service instance of my Angular service that was provided to all components?

I know it's possible to get access to your component through ng.probe($0) etc., but I'm not sure about services.

From what I have searched so far, it seems like I have to use the Injector class and then use its .get() method to get access to one of the Angular service instances, but I'm not sure how I would get access to the Injector class/instance itself.

Here's what I tried: ng.probe($0) ($0 being the <app-root> of my app) and then I see that the return value has an .injector property, I tried to call ng.probe($0).injector.get('MyServiceName') and got an error for: Uncaught Error: No provider for MyServiceName!.

(Even though I'm trying ng.probe above, I would love to know how to get access to the injector without ng.probe because during testing execution, I don't think I'll be able to do ng.probe($0))

So I'm not sure if I'm trying to access it the right way? Any ideas?

I'm using Angular 4.

Cucumber-Protractor tests skipping over assertions, falsely passing

So I am brand new to JavaScript; my only language before this was Ruby. I have written API tests with Cucumber and Ruby for years, but now I am trying to figure out UI tests for an Angular app using Protractor and Cucumber.js. I have the framework set up and the test steps are passing, but falsely so.

Here is a snippet of my step definitions, with a few edits to change data in assertions and the string for the assertion is intentionally wrong to trigger a failure. They run and are passing, but only because it ignores the assertion. I don't see it actually doing anything in the browser, but if I put in console.log messages I do see them in the console. However, if I comment out the last callback, then I can see it run in the browser and it actually checks the assertions and fails as it should.

Cucumber doesn't require callbacks, and removing them results in it running in exactly the same way... only I can't comment out a callback of course and watch it run like I mentioned above.

And if I don't put that timeout in the first step, then the whole thing errors out at the first step with "Error: function timed out after 5000 milliseconds"

Why?!? Thanks!!

Protractor 5.3.0 with Cucumber 4.0.0 and protractor-cucumber-framework 4.2.0

Given('I am on the home page', {timeout: 30000}, (callback) => {
    browser.waitForAngularEnabled(false);
    browser.get(browser.params.env.int).then(callback);
});

Then('the log in form is displayed', callback => {
    expect(element(by.id('email')).isPresent()).to.eventually.be.true;
    expect(element(by.id('password')).isPresent()).to.eventually.be.true;
    callback();
});

When('I enter my user name', callback => {
    element(by.name('email')).sendKeys('my_addy@example.com');
    expect(element(by.id('email')).getAttribute('value')).to.eventually.equal('something that does match');
    callback();
});

When('I enter my password', callback => {
    element(by.name('password')).sendKeys('blah');
    callback();
});

When('I click the log in button', callback => {
    element(by.buttonText('Log In')).click();
    callback();
});

Then('I am on the X page', callback => {
    expect(browser.getCurrentUrl()).to.eventually.contains('Y');
    // callback();
});

How to configure maven-enforcer-plugin to exclude a rule in the test scope?

How can I configure the maven-enforcer-plugin to exclude a rule in the test scope?

I have such a configuration:

<executions>
  <execution>
    <id>enforce-bytecode-version</id>
    <goals>
      <goal>enforce</goal>
    </goals>
    <configuration>
      <rules>
        <enforceBytecodeVersion>
          <maxJdkVersion>1.7</maxJdkVersion>
        </enforceBytecodeVersion>
      </rules>
      <fail>true</fail>
    </configuration>
  </execution>
</executions>

But I would like to check the JDK version only for regular code and not for the test scope.

How to run Cucumber scripts in parallel using Selenium Grid?

I am currently working on a project in which I have different Cucumber scripts implemented, and I want to run them using Selenium Grid. Is this possible? Any suggestions for documentation or examples to try?

Unable to run code in Selenium

When I run the code it shows this error in selenium.

How can I run this code?

1522226684482   geckodriver INFO    geckodriver 0.20.0 (0ac3698a74a7b1a742682b8e704f1f418df827ed 2018-03-13)
1522226684493   geckodriver INFO    Listening on 127.0.0.1:3573
Mar 28, 2018 2:14:44 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Attempting bi-dialect session, assuming Postel's Law holds true on the remote end
1522226684943   mozrunner::runner   INFO    Running command: "C:\\Program Files\\Mozilla Firefox\\firefox.exe" "-marionette" "-profile" "C:\\Users\\VIRAJ~1.SGP\\AppData\\Local\\Temp\\rust_mozprofile.sLm6mdhHmFpE"
[GFX1]: Potential driver version mismatch ignored due to missing DLLs igd10umd32 v= and igd10iumd32 v=
1522226695582   Marionette  INFO    Enabled via --marionette
[GFX1]: Potential driver version mismatch ignored due to missing DLLs igd10umd32 v= and igd10iumd32 v=
1522226703662   Marionette  INFO    Listening on port 51725
[GFX1]: Potential driver version mismatch ignored due to missing DLLs igd10umd32 v= and igd10iumd32 v=
[GFX1]: Potential driver version mismatch ignored due to missing DLLs igd10umd32 v= and igd10iumd32 v=
Mar 28, 2018 2:15:06 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Falling back to original OSS JSON Wire Protocol.
Mar 28, 2018 2:15:06 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Falling back to straight W3C remote end connection
Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Unable to create new remote session. desired capabilities = Capabilities [{marionette=true, firefoxOptions=org.openqa.selenium.firefox.FirefoxOptions@9cfc36, browserName=firefox, moz:firefoxOptions=org.openqa.selenium.firefox.FirefoxOptions@9cfc36, version=, platform=ANY}], required capabilities = Capabilities [{}]
Build info: version: 'unknown', revision: '1969d75', time: '2016-10-18 09:43:45 -0700'
System info: host: 'testing6', ip: '192.168.6.239', os.name: 'Windows 7', os.arch: 'x86', os.version: '6.1', java.version: '1.8.0_121'
Driver info: driver.version: FirefoxDriver
    at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:91)
    at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:141)
    at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:82)
    at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:601)
    at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:241)
    at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:128)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:259)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:247)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:242)
    at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:135)
    at KPRt.Testkpr.main(Testkpr.java:16)

mardi 27 mars 2018

How do I mock a third party library inside a Django Rest Framework endpoint while testing?

The user creation endpoint in my app creates a firebase user (I have a chat room inside the app, powered by firebase) with the same id as my django user.

from django.contrib.auth.models import User
from rest_framework import generics
from rest_framework import permissions
from apps.core.services import firebase_chat

class UserCreate(generics.GenericAPIView):

    permission_classes = [permissions.AllowAny]

    @transaction.atomic
    def put(self, request):
        user = User.objects.create(**request.data)
        firebase_chat.create_user(user)

firebase_chat is a wrapper I created around the standard firebase library.

I'm writing my tests like recommended in DRF's guide:

from django.urls import reverse
from django.test import TestCase
from rest_framework.test import APIClient

class UserCreateTest(TestCase):

    def test_user_create__all_valid__user_created(self):
        client = APIClient()
        client.force_authenticate(user=User.objects.create(username='test'))
        response = client.put(reverse('user-create'))
        self.assertTrue(response.data['something'])

However, this leads to creating an actual user in Firebase. Not only this fails the test (the firebase library throws an exception), it also hits the actual firebase server, filling it with test data.

How can I either mock or disable the firebase library during my tests?
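A common approach is to patch the firebase_chat wrapper with Python's standard unittest.mock so the test never reaches Firebase. The snippet below is a self-contained sketch with stand-in names; in the real project the patch target would be the module path the view imports, e.g. mock.patch('apps.core.services.firebase_chat.create_user') (path taken from the question):

```python
from unittest import mock

class FirebaseChat:
    """Stand-in for the question's firebase_chat wrapper."""
    def create_user(self, user):
        raise RuntimeError("would hit the real Firebase server")

firebase_chat = FirebaseChat()

def create_user_endpoint(data):
    """Stand-in for the view logic: create a user, mirror it to Firebase."""
    user = {"id": 1, **data}
    firebase_chat.create_user(user)  # the side effect to be mocked
    return user

# Replace the wrapper's method for the duration of the test, so no
# real network call happens and no test data reaches Firebase.
with mock.patch.object(firebase_chat, "create_user") as fake_create:
    user = create_user_endpoint({"username": "test"})

assert user["username"] == "test"
fake_create.assert_called_once_with(user)
```

With the wrapper mocked, the test can also assert on the exact payload that would have been sent to Firebase, instead of only checking the HTTP response.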

How to test if method that takes argument is throwing an exception in dart?

If I have

class ErrorThrower{
  void throwAnError(String argument){
    throw new Error();
  }
}

I want to test if throwAnError throws Exception, or more precisely an instance of Error

This is my code but it doesn't work

  test('', () {
    var errorThrower = new ErrorThrower();
    expect(errorThrower.throwAnError("string"), throwsException);
  });
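For the record, the usual fix in Dart's test package is to hand expect a closure rather than the result of the call, i.e. expect(() => errorThrower.throwAnError("string"), ...) with a throwsA matcher; as written, throwAnError runs (and throws) before expect is ever entered. The same pitfall, sketched with Python's standard unittest for comparison:

```python
import unittest

class ErrorThrower:
    def throw_an_error(self, argument: str) -> None:
        raise ValueError(argument)

class ErrorThrowerTest(unittest.TestCase):
    def test_throws(self):
        thrower = ErrorThrower()
        # Hand the framework a callable. Writing
        #   self.assertRaises(ValueError, thrower.throw_an_error("x"))
        # would raise before the assertion machinery runs, the same
        # mistake as calling throwAnError directly inside expect().
        with self.assertRaises(ValueError):
            thrower.throw_an_error("x")
```

Run with `python -m unittest`; the assertion passes because the framework, not the test body, invokes the callable and catches the error.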

How can I test this angularjs filter?

I found this AngularJS unique filter example online: https://tutorialedge.net/javascript/angularjs/removing-duplicates-from-ng-repeat/

And I'm trying to test it. I'm really close but I'm getting an error that I don't understand why. Any idea what I'm doing wrong?

describe('Unique Filter', function() {
  'use strict';

  var $filter;

  beforeEach(function () {
    module('resource');

    inject(function (_$filter_) {
      $filter = _$filter_;
    });
  });

  it('should only return unique values', function() {
    var list = [
      { 'name' : "ipad" },
      { 'name' : "ipad" },
      { 'name' : "ipad" },
      { 'name' : "ipod" },
      { 'name' : "iMac" },
      { 'name' : "iMac" },
      { 'name' : "iMac" },
      { 'name' : "iPhone" },
      { 'name' : "iWatch" },
      { 'name' : "iWatch" },
      { 'name' : "iWatch" },
      { 'name' : "iPeed" }];

    var resultList = [
      { 'name' : "ipad" },
      { 'name' : "iMac" },
      { 'name' : "iPhone" },
      { 'name' : "iWatch" },
      { 'name' : "iPeed" }];

    var result = $filter('unique');
    expect(result(list)).toBe(resultList);
  });
});

And the error I'm getting is:

PhantomJS 2.1.1 (Linux 0.0.0) Unique Filter should only return unique values FAILED
    Expected [ Object({ name: 'ipad' }) ] to be [ Object({ name: 'ipad' }), Object({ name: 'iMac' }), Object({ name: 'iPhone' }), Object({ name: 'iWatch' }), Object({ name: 'iPeed' }) ].
    test/spec/resource/resource.filter.js:37:30
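Two things worth separating here (both a reading of the output above): the filter returned only one element, so the dedupe logic itself isn't doing what's expected; and even with a correct result, Jasmine's toBe compares by reference (===), so comparing against a freshly built array of fresh objects needs toEqual instead. What the filter is expected to compute, an order-preserving unique-by-key, looks like this as a language-neutral Python sketch:

```python
def unique(items, key):
    """Return items with duplicates (by `key`) removed; first
    occurrence wins and the original order is preserved."""
    seen = set()
    result = []
    for item in items:
        value = item[key]
        if value not in seen:
            seen.add(value)
            result.append(item)
    return result

devices = [{"name": n} for n in
           ["ipad", "ipad", "ipod", "iMac", "iMac", "iPhone"]]
assert [d["name"] for d in unique(devices, "name")] == \
       ["ipad", "ipod", "iMac", "iPhone"]
```

If the AngularJS filter produces a list shaped like this, switching the spec's toBe to toEqual should make the comparison meaningful.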

What is the difference between Nightwatch and Capybara?

I'm using a Vue front-end with a Rails back-end and I'm looking at options for unit testing and integration tests. I'm considering Jest for unit tests, but I'm a little confused on the options for integration tests. What is the difference between Nightwatch and Capybara? Is it just that Nightwatch runs in Node and Capybara runs in Ruby?

Mockito mocked method is returning NULL

I am using Mockito and have tried to mock the test class below. Here the main class method createNewId() gets the object by hitting the DAO class method memberDao.findNext(). I am trying to mock memberDao.findNext() and return the object as shown in the code below, but it is returning NULL.

Please let me know what I am doing wrong.

@RunWith(MockitoJUnitRunner.class)
public class MemberTest
{
    @InjectMocks
    private Member member;
    @Mock
    private MemberDao memberDao;

    @Before
    public void setUp()
    {
        MockitoAnnotations.initMocks(this);
        memberDao = Mockito.mock(MemberDao.class);

    }
    @Test
    public void createId() throws Exception
    {
        MembersIdDto id = new MembersIdDto();
        id.setId(id);
        when(memberDao.findNext()).thenReturn(id);
        verify(manager).createNewId().contains("967405286");
    }


    public class MainClass{
    @Resource
    MemberDao memberDao;

    public String createNewId()
    {
        MembersIdDto newId = memberDao.findNext();   
        Assert.notNull(newId, "newId is empty");
        String id = newId.getId();
        return id;
    }
    }

memberDao.findNext() is the line I am trying to mock.

Error is : java.lang.IllegalArgumentException: newId is empty

at org.springframework.util.Assert.notNull(Assert.java:134)
at MainClass.createNewId() (MainClass.java:20)

// Line 20 is "Assert.notNull(newId, "newId is empty");"

Targeting seperate parts of a URL in Optimizely with REGEX

I'm trying to target a set of URLs sharing the same template in Optimizely, e.g. https://chillisauce.com/hen/in-dublin/day

Specifically trying to target the hen/in- part and the /day part.

I've been testing this: /(hen)/in-.*/day[^/]+$

However, when testing the URL pattern in Optimizely, it does not match.

Any ideas?

Thanks in advance

Charlie
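One reading of why the tested pattern fails (an assumption based on the example URL): `[^/]+` after `day` requires at least one more non-slash character, so a URL that ends in /day can never match. A quick check of a corrected pattern with Python's re:

```python
import re

# Matches URLs on the .../hen/in-<city>/day template; the trailing
# [^/]+ from the original pattern is dropped so a URL that *ends*
# in /day can match, and $ anchors the match at the end.
pattern = re.compile(r"/hen/in-[^/]+/day$")

assert pattern.search("https://chillisauce.com/hen/in-dublin/day")
assert not pattern.search("https://chillisauce.com/stag/in-dublin/day")

# The original pattern, for comparison: it demands extra characters
# after "day", so the example URL does not match it.
original = re.compile(r"/(hen)/in-.*/day[^/]+$")
assert not original.search("https://chillisauce.com/hen/in-dublin/day")
```

Optimizely's regex matching may differ in anchoring details, so the $ anchor is worth verifying against its substring-match semantics before relying on it.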

Replicate logic or use existing classes in tests?

I have some concerns about writing too much logic in the spec tests.

So let's assume we have a student with statuses and steps

Student statuses go from

pending -> learning -> graduated -> completed

and steps are:

nil -> learning_step1 -> learning_step2 -> learning_step3 -> monitoring_step1 -> monitoring_step2

With each step going forward a lot of things are happening depending where you are: e.g.

nil -> learning_step1

Student status changes to learning Writes an action history ( which is used by report stats ) Update a contact schedule

learning_step1 -> learning_step2

....the same...

and so ..... until

learning_step3 -> monitoring_step1

Student status changes to graduated Writes different action histories ( which is used by report stats ) Update a contact schedule

and when

monitoring_step2 -> there is no next step

Student status changes to completed Writes different action histories ( which is used by report stats ) Delete any contact schedule

So imagine that I need a test case for a completed student: I would have to think of every path that could get a student to completed, and I could also forget to write an action history, which would mess up the reports.

Or ....

Using an already implemented class:

# assuming we have like in the above example 5 steps I do

StepManager.new(student).proceed # goes status learning and step1
StepManager.new(student).proceed
StepManager.new(student).proceed
StepManager.new(student).proceed # goes status graduated and monitoring1
StepManager.new(student).proceed # this will proceed the student in the 5th step which is monitoring2
StepManager.new(student).next_step # here will go completed

or to make my job easier with something like:

StepManager.new(student).proceed.proceed.proceed.proceed.proceed.proceed

or

StepManager.new(student).complete_student # which in background does the same thing

And by doing that I am sure I will never miss anything. But then the tests wouldn't be so clear about what is going on.

So should I replicate the logic, or use my existing classes?
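A middle ground is to keep driving the production class but hide the repetition behind a small, well-named helper so the test reads as intent. The sketch below is a toy model of the steps and statuses described above (all names are illustrative, not the real implementation):

```python
class Student:
    def __init__(self):
        self.status = "pending"
        self.step = None

class StepManager:
    STEPS = ["learning_step1", "learning_step2", "learning_step3",
             "monitoring_step1", "monitoring_step2"]

    def __init__(self, student):
        self.student = student

    def proceed(self):
        s = self.student
        if s.step is None:
            s.step = self.STEPS[0]
            s.status = "learning"       # nil -> learning_step1
        elif s.step == "monitoring_step2":
            s.step = None
            s.status = "completed"      # past the last step
        else:
            nxt = self.STEPS[self.STEPS.index(s.step) + 1]
            if nxt == "monitoring_step1":
                s.status = "graduated"  # learning_step3 -> monitoring_step1
            s.step = nxt
        return self  # fluent: allows .proceed().proceed()

    def complete_student(self):
        """Test convenience: drive the real transitions to completion."""
        while self.student.status != "completed":
            self.proceed()
        return self
```

A test for a completed student then reduces to StepManager(student).complete_student() plus assertions, without re-implementing the transition rules (or the action-history side effects) inside the spec.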

My app crashes unexpectedly in Android Studio

The app works on one phone but not on another. It also works in the Nexus API 23 emulator, but not in other emulators.

Laravel Dusk + Vue with ElementUI

I need help writing tests with Laravel Dusk. I'm using Vue with ElementUI. I really like this framework; however, I can't use Dusk's built-in select() method in my tests.

It's because the select component of ElementUI does not generate a real <select> tag; instead, it creates a normal (read-only) <input> and, at the bottom of the page, a popper with the select's options. So there is no <select> tag on my page, only a <div> and a read-only <input>.

How can I write a test with Dusk that lets me click on a <div>?

If I try to type on that input with something like this:

// $browser->select('my_select_id'); not working,
$browser->type('my_select_id', 1);

It throws me an Exception:

Facebook\WebDriver\Exception\InvalidElementStateException: invalid element state: Element must be user-editable in order to clear it.

So I don't know how to test ElementUI's selects :(

Please help,

Thx!

Writing Test Cases for androidTest Folder

public class DriverRepository {

    private List<Driver> driverList;

    public DriverRepository(Context context) {
        driverList = new ArrayList<>();
        driverList.add(Driver.create(context, "111", "H"));
        driverList.add(Driver.create(context, "12", "Ha"));
        driverList.add(Driver.create(context, "123", "K"));
        driverList.add(Driver.create(context, "1234", "A"));
    }

    public List<Driver> getDrivers() {
        return driverList;
    }
}

Can somebody please point out how to write an androidTest case using JUnit for the above class? I tried searching, but all I found were Espresso examples, and I can't use Mockito as I am writing an instrumentation test case. Any help would be appreciated.

create-react-app testing TypeError: cannot read property of undefined

In the project I've joined there's a problem with tests. A lot of test suites fail with the error "cannot read property 'xxx' of undefined" and quite a long stack trace. I can console.log these variables outside of tests without any problem, and sometimes they are even static objects.

Moreover, if I check out the project from 2 months back, the same tests pass, although many dependencies have changed since then. Any ideas welcome.

Error Messages with Robot framework in Eclipse

I'm testing a Swing application with Robot Framework in Eclipse and have now updated the version to 3.0.2 in build.xml. Now, however, I get a bunch of error messages in the log; see below. How can I solve this?

 [java] Error in atexit._run_exitfuncs:
 [java] Traceback (most recent call last):
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\atexit.py", line 24, in _run_exitfuncs
 [java] Error in sys.exitfunc:
 [java] Traceback (most recent call last):
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\atexit.py", line 30, in _run_exitfuncs
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\traceback.py", line 232, in print_exc
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\traceback.py", line 125, in print_exception
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\traceback.py", line 69, in print_tb
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\linecache.py", line 14, in getline
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\linecache.py", line 40, in getlines
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\linecache.py", line 92, in updatecache
 [java]   File "C:\Users\mmemmel\.m2\repository\org\robotframework\robotframework\3.0.2\robotframework-3.0.2.jar\Lib\linecache.py", line 92, in updatecache
 [java] java.lang.OutOfMemoryError: PermGen space
 [java] java.lang.OutOfMemoryError: java.lang.OutOfMemoryError: PermGen space

jest: handle errors during setup of custom test environment

I want to run my integration tests using jest. To set up the database connection and related state, I'm also using a custom TestEnvironment through jest's testEnvironment config option.

const NodeEnvironment = require('jest-environment-node');
const insert = require('../.db/insert-dummy-data');

class CustomEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    await insert();
  }

  async teardown() {
    await super.teardown();
  }

  runScript(script) {
    return super.runScript(script);
  }
}

If insert() throws an error, for example because the DB connection couldn't be established, the different test suites fail, but the whole command hangs and doesn't exit. Only setup() is called, not teardown(). What is the right way to handle errors in the setup of a custom test environment?

selenium chrome driver select certificate popup confirmation not working

I am automating tests using selenium chromewebdriver 3.7. Whenever I launch the site, I get a certificate selection popup like the one below.

However I am not able to click on the OK button. These are the options I have tried

 // I have tried getWindowHandle like this
 String handle = driver.getWindowHandle();
 this.driver.switchTo().window(handle);


 // I have also tried switching to the alert and accepting it
 driver.switchTo().alert().accept();


 // I have also tried to force the Enter key like this
 robot.keyPress(KeyEvent.VK_ENTER);
 robot.keyRelease(KeyEvent.VK_ENTER);


 // I also tried this way
 Scanner keyboard = new Scanner(System.in);
 keyboard.nextLine();

All my attempts have failed. How can I click OK on this popup window? This is the closest solution I found, which is not working: Link here

VSTS - Test Progress Reporting

I'd like to create a dashboard which shows test cases and the outcome over time. Ideally it would be a burndown chart which would include planned tests in the future.

At the moment it seems I can only create charts based on priority and state, and not on outcome.

If anyone has any tips or examples in this area that would be great.

C#: how can I check if an img element exists inside a div?

The problem is that I cannot check whether an image is present on the page. Here is the markup in which the image is present:

<div class="specials-block">

<a href="/ru/actions/443">
<div class="specials-block-item icons-guide ico1" data-amount="5" data-toggle="tooltip" data-html="true" data-placement="bottom" title="" data-original-title="">
</div>
</a>
<div class="specials-block-item">
<img src="data:image/gif;base64,iVBORw0KGgoAAAANSUhEUgAAAEsA......8X2F0UZvgYHv0AAAAASUVORK5CYII=">
</div>
</div>

And the code in which this image is missing:

<div class="specials-block">

<a href="/ru/actions/443">
<div class="specials-block-item icons-guide ico1" data-amount="5" data-toggle="tooltip" data-html="true" data-placement="bottom" title="" data-original-title="">
</div>
</a>
<div class="specials-block-item">

</div>
</div>

In order to check whether this image is present, I do a check:

var intelAtom = driver.FindElement(By.XPath("/html/body/div[5]/div[4]/div/div[1]/div[1]/div[5]/div[1]/div/img[@src='------------------------------']"));
   if (intelAtom.Displayed)
   {
       MessageBox.Show("All is OK");
   }
   else
   {
       MessageBox.Show("WTF O_o , displayed enother Icon");         
   }

but I need to check both that the image is present and, if an img is displayed, that it is the desired image. Something like this:

if (driver.FindElement(By.XPath("/html/body/div[5]/div[4]/div/div[1]/div[1]/div[5]/div[1]/div/img")).Displayed)
{
    var intelAtom = driver.FindElement(By.XPath("/html/body/div[5]/div[4]/div/div[1]/div[1]/div[5]/div[1]/div/img[@src='------------------------------']"));
    if (intelAtom.Displayed)
    {
          MessageBox.Show("All is OK");
    }
    else
    {
          MessageBox.Show("WTF O_o , displayed enother Icon");
    }
 }
 else
 {
     MessageBox.Show("WTF O_o , Icon Intel Pentium is't displayed");
 }

But the second query fails, because it doesn't find the img element:

driver.FindElement(By.XPath("/html/body/div[5]/div[4]/div/div[1]/div[1]/div[5]/div[1]/div/img"))


Perhaps someone knows how to build the right query to determine whether an image is displayed on the page, and whether it is the correct one.

How to test a middleware with phpunit in laravel 5.5?

How can I test my middleware? Here I am testing whether an admin can access a middleware-protected route that returns a 500. If the user does not have a privileged IP, the middleware returns a 401 (Unauthorized) when they try to access the /500 page.

My test:

use App\Http\Middleware\OnlyAdminIp;
use Illuminate\Http\Request;
use Tests\TestCase;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class HttpTests extends TestCase
{
    use DatabaseTransactions;

   /** @test */
    public function if_a_500_page_returns_a_500_response_for_admin()
    {
        $request = Request::create(config('app.url') . '500', 'GET');
        $middleware = new OnlyAdminIp();
        $response = $middleware->handle($request, function () {});
        $this->assertEquals($response->getStatusCode(), 401);
    }
}

My middleware:

namespace App\Http\Middleware;

use App\IpList;
use Closure;

class OnlyAdminIp
{
    /**
     * Handle an incoming request.
     *
     * @param  \Illuminate\Http\Request $request
     * @param  \Closure $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        $client_ip = $_SERVER["HTTP_CF_CONNECTING_IP"] ?? $request->ip(); // CDN(Cloudflare) provides the real client ip, a safeguard is used to prevent critical error if CDN is removed/changed.
        $ipList = IpList::all()
            ->pluck('ip')
            ->toArray();
        if (!in_array($client_ip, $ipList)) {
            abort(401);
        }

        return $next($request);
    }
}

And just for more clarity - the 500 route (in web.php) .

Route::group(['middleware' => 'admin.ip'], function () {


    Route::get('500', function () {
        abort(500);
    }); 

});

With this setup I am getting: Call to a member function getStatusCode() on null. Thanks in advance!
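The null is consistent with handle() returning the pass-through closure's result: the test's closure function () {} returns nothing, so when the IP check passes, handle() returns null; and when abort(401) fires, Laravel throws an HttpException rather than returning a response, so the 401 path is better asserted by expecting an exception than by calling getStatusCode() on the return value. The shape of that fix, sketched framework-agnostically in Python (all names illustrative):

```python
class Unauthorized(Exception):
    status_code = 401

def only_admin_ip(request, next_handler, allowed_ips):
    """Mirrors the middleware: abort (raise) on a bad IP, otherwise
    return whatever the next handler returns."""
    if request["ip"] not in allowed_ips:
        raise Unauthorized()
    return next_handler(request)

# 401 path: an aborting middleware raises; catch and inspect,
# rather than inspecting a (nonexistent) return value.
try:
    only_admin_ip({"ip": "10.0.0.1"}, lambda req: {"status": 200},
                  allowed_ips={"127.0.0.1"})
    status = None
except Unauthorized as exc:
    status = exc.status_code
assert status == 401

# Pass-through path: the fake "next" closure must return a response,
# otherwise there is nothing to call getStatusCode() on.
response = only_admin_ip({"ip": "127.0.0.1"},
                         lambda req: {"status": 200},
                         allowed_ips={"127.0.0.1"})
assert response == {"status": 200}
```

In PHPUnit terms the equivalent would be expecting the HttpException on the blocked path and having the test's closure return a real response on the allowed path.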

Can't run Xunit tests on Visual Studio 17

I can't run any of the xUnit tests in my Visual Studio 2017, version 15.6.4.

They show up in Test Explorer, and when I run them from there I get something like this:

[27.03.2018 12:21:46 Informational] ------ Load Playlist started ------
[27.03.2018 12:21:46 Informational] ========== Load Playlist finished 
(0:00:00,0215566) ==========
[27.03.2018 12:22:27 Informational] Executing test method 'Lebara.Remittance.Test.ServiceImplementation.RiskEngineServiceTest.ShouldTest'
[27.03.2018 12:22:27 Informational] ------ Run test started ------
[27.03.2018 12:22:29 Warning] Multiple test adapters with the same uri 
'executor://xunit/VsTestRunner2' were found. Ignoring adapter 
'Xunit.Runner.VisualStudio.TestAdapter.VsTestRunner'. Please uninstall the 
conflicting adapter(s) to avoid this warning.
[27.03.2018 12:22:29 Warning] [xUnit.net 00:00:00.0209459] Skipping: 
Lebara.Remittance.Test (could not find dependent assembly 
'Microsoft.Extensions.DependencyModel, Version=1.1.0')
[27.03.2018 12:22:29 Warning] No test is available in C:\ReposNew\Lebara.Remittance\Lebara.Remittance\Lebara.Remittance.Test\bin\Debug\Lebara.Remittance.Test.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
[27.03.2018 12:22:29 Informational] ========== Run test finished: 0 run 
(0:00:02,1543479) ==========
[27.03.2018 12:34:19 Informational] Executing test method 'Lebara.Remittance.Test.ServiceImplementation.RiskEngineServiceTest.ShouldTest'
[27.03.2018 12:34:19 Informational] ------ Run test started ------
[27.03.2018 12:34:20 Warning] Multiple test adapters with the same uri 
'executor://xunit/VsTestRunner2' were found. Ignoring adapter 
'Xunit.Runner.VisualStudio.TestAdapter.VsTestRunner'. Please uninstall the 
conflicting adapter(s) to avoid this warning.
[27.03.2018 12:34:20 Warning] [xUnit.net 00:00:00.0200861] Skipping: 
Lebara.Remittance.Test (could not find dependent assembly 
'Microsoft.Extensions.DependencyModel, Version=1.1.0')
[27.03.2018 12:34:20 Warning] No test is available in C:\ReposNew\Lebara.Remittance\Lebara.Remittance\Lebara.Remittance.Test\bin\Debug\Lebara.Remittance.Test.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
[27.03.2018 12:34:20 Informational] ========== Run test finished: 0 run 
(0:00:00,7088116) ==========

I tried deleting %TEMP%\VisualStudioTestExplorerExtensions - nothing helped.

The thing is, several days ago I could run them, and I did not change a thing. I just have no idea what is going on.

I also had this warning:

 [27.03.2018 12:22:29 Warning] Multiple test adapters with the same uri 
 'executor://xunit/VsTestRunner2' were found. Ignoring adapter 
 'Xunit.Runner.VisualStudio.TestAdapter.VsTestRunner'. Please uninstall the 
 conflicting adapter(s) to avoid this warning.
 [27.03.2018 12:22:29 Warning] [xUnit.net 00:00:00.0209459] Skipping: 
 Lebara.Remittance.Test (could not find dependent assembly 
 'Microsoft.Extensions.DependencyModel, Version=1.1.0')

Symfony 4 Tests error

When I run the tests I get an error:

[screenshot of the error]

But if I run it again, without changing anything, the error is gone:

[screenshot of the passing run]

DATABASE_URL_ADM is configured in .env. It seems like Symfony can't read the file before running the test, because if I add sleep(3), the test then runs without error on the first try. Maybe I missed something important.

How to verify a mock interface in a test method using KotlinTest library?

I have an interface that my presenter calls back to after checking whether the fields of a form are valid.

My interface is:

interface MainView {
  fun showMessage(data: LoginEntity)
  fun showEmailError()
  fun showPasswordError()
}

My method in the presenter is like that:

fun sendForm(loginData: LoginDataPresentation, view: MainView) {
   if (isValid()) {
     view.showMessage(mapData(loginData))
   } else if (isValid()) {
     view.showPasswordError()
   } else {
     view.showEmailError()
   }
}

My test class with KotlinTest:

class LoginPresentationKtTest : StringSpec() {

  init {
    "given a bunch of Login Data should be matched successfully" {
       forAll(EmailGenerator(), PasswordGenerator(), { email: String, password: String ->

         val loginData = LoginDataPresentation(email, password)

         val mockMainView = mockMainView()

         sendForm(loginData, mockMainView)

       })
    }
  }

  private fun mockMainView(): MainView {
    //How to mock?????
  }
}

Using the KotlinTest library, is there any way to verify that the showMessage method of MainView is called whenever the generated email and password are valid? Is it possible to use a mocking library like Mockito?
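KotlinTest itself ships no mocking support; in practice it is paired with a mocking library such as Mockito (often via mockito-kotlin) or MockK, whose verify-style API does exactly this kind of check. The verification idea, sketched with Python's standard-library unittest.mock (the names are analogues of the question's, not real KotlinTest API):

```python
from unittest import mock

def send_form(login_data, view, is_valid):
    """Simplified analogue of the presenter's sendForm."""
    if is_valid:
        view.show_message(login_data)
    else:
        view.show_email_error()

view = mock.Mock()  # stands in for a mocked MainView
send_form(("user@example.com", "secret"), view, is_valid=True)

# "verify" that the presenter hit the right interface method,
# exactly once, with the expected payload
view.show_message.assert_called_once_with(("user@example.com", "secret"))
view.show_email_error.assert_not_called()
```

Inside the property-based forAll loop, the same pattern applies: build a fresh mock per generated (email, password) pair, call sendForm, and assert the expected interaction.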

QML: qmltestrunner and resource resolution

Is there a way to get qmltestrunner to use file-based image resolution? The real application loads resources from a separate resources directory (which is compiled as a Qt resource file), but when the application is loaded via qmltestrunner this resource file is not available, and I want references to be resolved using file paths. TL;DR: I'm referencing images as "/resources/imageXXX.png" in the application's QML files, and want these to resolve to the resource-file path when running as an application and to file-based paths when running under qmltestrunner. I'm not using resource aliases or anything fancy.

elasticsearch snapshot to AWS s3 mock

For testing purposes I want to create an Elasticsearch snapshot in an S3 mock.

I use the mock from https://www.npmjs.com/package/s3rver

I have the repository-s3 6.2.2 plugin installed. When I try to register a new snapshot repository:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
      "bucket": "rtbackup",
      "endpoint": "localhost:9495",
      "protocol": "http"
  }
}

I get "Unable to load credentials from service endpoint".

I would like to avoid using any AWS services while testing. Any idea how to force Elasticsearch to create the snapshot in the s3rver mock?

How to get tags in Behat FeatureContext

Is there a way to get the tags of the scenario inside the Behat FeatureContext method being run?

my.feature:

@SPRF1
Scenario: My scenario
  Given something is done

FeatureContext:

class FeatureContext implements \Behat\Behat\Context\Context
{
    /**
     * @Then something is done
     */
    public function somethingIsDone()
    {
        $tags = $this->getScenarioTags(); // this doesn't exist
    }
}

Android P Preview breaks UiAutomator tests with API compatibility error

I am trying out Android P preview on Pixel device, and currently experiencing an issue with instrumentation tests written with UiAutomator framework.

Whenever a button click is simulated with UiAutomator by the following code:

onView(withId(R.id.button_activity_login)).perform(click())

I am encountering an AlertDialog with message

Detected problems with API compatibility (visit g.co/dev/appcompat for more info)

which leads to this link:

https://developer.android.com/preview/restrictions-non-sdk-interfaces.html#differentiating_between_sdk_and_non-sdk_interfaces

This breaks UiAutomator tests since my tests currently do not consider additional AlertDialog between each action.

This only happens with UiAutomator's button click, not with Espresso's. I believe that UiAutomator might use some reflections under the hood in order to achieve cross-app testing functionality (not knowing the UI component's texts or ids beforehand), whereas Espresso takes care of everything inside the app being tested.

This is somewhat weird since UiAutomator is a testing framework Google itself suggests on its developer site (https://developer.android.com/training/testing/ui-automator.html#ui-automator-apis). Has anybody experienced or solved this issue?

lundi 26 mars 2018

JMeter test standards

I am using JMeter to test my own web application with HTTP requests. The final result seems okay, but I have one question: are there any detailed testing standards? I am writing a report which needs some data as a reference.

For example, something like: the connect time and loading speed should be lower than XXXX ms, or the sample time should be between XX and XX.

I didn't find any references about this. Does anyone know of something I can use as reference data?

Can't get rails tests to even run, UNIQUE constraint failed: reviews.id:

I'm beginning to look at testing a Rails app I've been working on and can't seem to get the default generated tests to even run. I think it has something to do with rails generating unique ids for fixtures.

The test I'm running is simply this: rails test test/controllers/pages_controller_test

Full error message: Error: PagesControllerTest#test_should_get_home: ActiveRecord::RecordNotUnique: SQLite3::ConstraintException: UNIQUE constraint failed: reviews.id: INSERT INTO "reviews" ("created_at", "updated_at", "id") VALUES ('2018-03-26 21:55:24.345073', '2018-03-26 21:55:24.345073', 980190962)

test/controllers/pages_controller_test

require 'test_helper'

class PagesControllerTest < ActionDispatch::IntegrationTest
  test "should get home" do
    get pages_home_url
    assert_response :success
  end

end

This is what reviews.yml looks like:

# Read about fixtures at 
http://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html

one:
  comment: MyText
  star: 1
  reservation_id: one
  tourist_id: one
  guide_id: one
  type:

two:
  comment: MyText
  star: 1
  reservation_id: two
  tourist_id: two
  guide_id: two
  type:

are we able to import and export functions in cypress.io?

There are a few functions I want to use across my integration tests in cypress.io. Is there a way to export/import the functions so I don't have to copy and paste them into each integration?

Thanks in advance for any advice

possible to load multiple fixtures in cypress.io?

I know that in cypress.io we can use fixtures to import JSON files and use them as objects by doing something like below...

cy.fixture('path/something.json').then((obj) => { do something } )

but this only imports one JSON file. What if I want to import multiple?

    cy.fixture('path/something.json').then((obj) => {
      cy.fixture('path/something2.json').then((obj2) => {
        cy.log(obj);
        cy.log(obj2);
      });
    });

I know something like the above would work since I tried it, but this will get unwieldy if there are more than 2 files I want to import.

Does anyone know possible way to do this?

Thanks in advance for any help

Mocking Browser Environment in a Typescript Test

I have a set of TypeScript files that are compiled and run in a browser window.

I'd then like to write a set of tests in Typescript that would run in the Node environment, which would require mocking out any window APIs.

I can't find a clean way to do this with Typescript, often ending up with

Cannot find name 'location'.

I tried to define the location, but still had no joy, as I was declaring the interface and not an instance (which I'd like to stub in each test).

'location' only refers to a type, but is being used as a value here.

6 if (location.origin ===

Any ideas of how this could / should be done?

The V-model and the iterative and incremental cycle

If we have a library management application and want to present the different steps to carry out using an iterative and incremental cycle, with a V-model cycle in each iteration, can someone help me?

Any serious differences between W7, W8, W8.1, W10 (Windows) for software development?

I have one project where some programs should run on a virtual machine. Here is my question: is it critical which Windows version I choose, e.g. Windows 7 instead of Windows 10?

Creating TestCaseMixin for Django, is that a safe method?

I want to speed up my class-based view tests, so I wrote this small test mixin. It seems to work pretty well and is actually doing all I need. A question to the more experienced players here: is this method safe, without any actual drawbacks?

class TemplateResponseTestMixin(object):
    """
    Basic test checking if correct response, template,
    form, is rendered for the given view.
    Can be used for any CBV that inherits from TemplateResponseMixin.
    """
    # required
    view_class = None
    url_name = ''

    # optional (all below)
    template_name = ''

    form_class = None
    csrf_token = True

    get_status_code = 200
    post_status_code = 405

    def setUp(self):
        self.client = Client()

    #############
    # SHORTCUTS #
    #############

    def get_response(self):
        """ returns response for given view """
        return self.client.get(reverse(self.url_name))

    #########
    # TESTS #
    #########
    def test_view_used(self):
        """ check if correct view is used to render response """
        resp = self.get_response()
        self.assertIsInstance(resp.context['view'], self.view_class)

    if template_name:
        def test_template_used(self):
            """ check if correct template is used to render response """
            resp = self.get_response()
            self.assertTemplateUsed(resp, self.template_name)

    if form_class:
        def test_form_used(self):
            """ check if correct form is used to render response """
            resp = self.get_response()
            self.assertIsInstance(resp.context['form'], self.form_class)

    if csrf_token:
        def test_if_csrf_token_is_used(self):
            """ check if csrf_token is used to render response """
            resp = self.get_response()
            self.assertIn('csrf_token', resp.context)

    if get_status_code:
        def test_get_response(self):
            """ check if we receive correct status_code for GET request """
            resp = self.get_response()
            self.assertEqual(resp.status_code, self.get_status_code)

    if post_status_code:
        def test_post_response(self):
            """ check if we receive correct status_code for POST request """
            resp = self.client.post(reverse(self.url_name), data={})
            self.assertEqual(resp.status_code, self.post_status_code)


################# THEN ##################

class RegistrationViewBasicTestCase(TemplateResponseTestMixin, SimpleTestCase):
    view_class = RegistrationView
    url_name = 'registration_form'
    template_name = 'registration/registration_form.html'
    form_class = RegistrationForm

    get_status_code = 200
    post_status_code = 200
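One drawback worth pointing out, shown here as a minimal plain-Python sketch (no Django involved): `if` statements in a class body run once, at class-definition time, against the mixin's own defaults. A subclass that overrides `template_name` or `form_class` does not re-run the mixin's body, so any test gated on a falsy default is never created at all:

```python
# Sketch: class-body "if" guards are evaluated when the mixin class is
# defined, using the attribute values defined so far in that body.

class Mixin:
    template_name = ''   # falsy default
    csrf_token = True    # truthy default

    if template_name:    # '' is falsy -> this method is never defined
        def test_template_used(self):
            return 'template test'

    if csrf_token:       # True -> this method always exists
        def test_csrf(self):
            return 'csrf test'

class Concrete(Mixin):
    # Overriding the attribute does NOT re-run the mixin's class body.
    template_name = 'registration/registration_form.html'

has_template_test = hasattr(Concrete, 'test_template_used')  # False
has_csrf_test = hasattr(Concrete, 'test_csrf')               # True
```

A runtime check instead (e.g. calling `self.skipTest(...)` inside the test method when the attribute is empty) would respect the subclass's attributes, since it runs per instance rather than at class creation.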

Jasmine Unit testing issue with script tags - Angular api

I have a small CRUD application and I am unit testing it with Jasmine, but I'm having issues with my errors. After some work on it, I realised the only API that gets tested is the one that is loaded last, so for example the admins app below will be the only one working:

  <script src="../js/app.js"></script>
  <script src="../js/PublishersApp.js"></script>
  <script src="../js/AuthorsApp.js"></script>
  <script src="../js/AdminsApp.js"></script> <!-- this will be working -->
  <!-- include spec files here... -->
  <script src="../js/mock.requests.js"></script>
  <script src="../js/publisher.spec.js"></script>
  <script src="../js/authors.spec.js"></script>
  <script src="../js/admin.spec.js"></script>

Does anyone know how to fix this? I am completely new to Jasmine, so apologies if this is very simple.

Sahi Pro version 7 - Unable to run the scripts

I have installed Sahi Pro version 7 and tried running the sample scripts which come with the installation. It worked perfectly fine with all three browsers (Chrome, Firefox, IE). But when I try to run my own scripts, they finish with a failure status. Please help me with this one; I need to run these scripts, and I am 100% sure that there are no errors in the test scripts.

How to manage daylight saving time on Azure

How do we schedule WebJobs around daylight saving time? We have scheduled WebJobs on Azure to run a service at 15:00 UTC (8:00 MST), but when daylight saving time starts it runs at 9 AM local time. Could you please tell me how to fix this?

How to create load tests with JMeter when our website has no user sign-up of its own, but has authorization via social networks

How to create load tests with JMeter when our website has no user sign-up of its own, but has authorization via social networks?

Python - Test base Array (with a probabilities rule set)

I need to create a test base (an array) that follows some rule set.

Example: I have 4 cities: Naples (id=1), London(id=2), Rome (id=3), Milan (id=4).

I need to set some rules like the following: the first city must be included in the test array with a probability from 40 to 50%; the second city must be included with a probability from 20 to 30%; the other cities all have the same probability of being included in the array.

Could you help me?

Many Thanks.
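For what it's worth, here is a minimal sketch of one possible reading of the rules (the city ids and the 40-50% / 20-30% ranges come from the question; the assumption that Rome and Milan split the remaining share equally is mine):

```python
import random

def build_test_array(n=1000, seed=1):
    """Return a list of n city ids respecting the stated probability ranges.

    Assumed interpretation: each slot of the array holds one city, where
    Naples (1) gets a share drawn from [0.40, 0.50], London (2) a share
    drawn from [0.20, 0.30], and Rome (3) / Milan (4) split the remainder
    evenly.
    """
    rng = random.Random(seed)
    p1 = rng.uniform(0.40, 0.50)   # Naples' share for this run
    p2 = rng.uniform(0.20, 0.30)   # London's share for this run
    rest = (1.0 - p1 - p2) / 2     # Rome and Milan, equal shares
    weights = [p1, p2, rest, rest]
    return rng.choices([1, 2, 3, 4], weights=weights, k=n)

arr = build_test_array()
share_naples = arr.count(1) / len(arr)  # close to the p1 drawn above
```

Fixing the seed makes the generated test base reproducible between runs, which is usually what you want for a test fixture.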

How can I set the ng test code-coverage directory?

I'm running $ ng test -cc and making good use of the coverage report.

However, it appears in the src folder, which is skewing the results of Find in src, which should contain only source files. Is there a way I can specify the desired location of the generated coverage directory, in tsconfig.spec.json or karma.conf.js perhaps?

How to call a running Ranorex test in a loop?

I have recorded & partly written a test for a Website, which works for a specific browser type. The user can modify a class field of the so-called EBrowserType type, which is an enum I have created. It contains all browser types that Ranorex can handle.

Now, I was asked to make a loop over the whole test, where all the browser types are called. I run into problems, as the existing test is a group of recordings, where the user clicked at some point into a text field of the opened browser of the requested browser type. This seems to be no more possible in a loop, as the code itself creates the browser & closes it after that.

In the original code, there is a SETUP part that opens the browser, followed by a recording. This recording is called SearchJobRegionRecording and starts with a mouse click into the search field of the browser. In the automatically generated C# file, this looks as follows:

[TestModule("c7957eb6-feec-4dce-aef3-6af20fa71b8b", ModuleType.Recording, 1)]
public partial class SearchJobRegionRecording : ITestModule
{
    /// <summary>
    /// Holds an instance of the IVMJobsiteTest.IVMWebsiteTestRepository repository.
    /// </summary>
    public static IVMJobsiteTest.IVMWebsiteTestRepository repo = IVMJobsiteTest.IVMWebsiteTestRepository.Instance;
    […]

    [System.CodeDom.Compiler.GeneratedCode("Ranorex", "8.0")]
    void ITestModule.Run()
    {
        Mouse.DefaultMoveTime = 0;
        Keyboard.DefaultKeyPressTime = 0;
        Delay.SpeedFactor = 100.00;

        Init();

        Report.Log(ReportLevel.Info, "Mouse", "Mouse Left Click item 'Home.Text' at 128;8.", repo.Home.TextInfo, new RecordItemIndex(0));
        repo.Home.Text.Click("128;8");
        […]
    }
}

As you can see, a repo object is required to access the browser instance. My question: how can I get the browser instance in my browser-looping code? The only hint about the created browser seems to be the process ID.

Here is the respective part for the browser-looping code:

public void TestAllBrowsers()
{
    foreach (EBrowserType browser in Enum.GetValues(typeof(EBrowserType)))
    {
        foreach (Point size in sizes)
        {
            Report.Log(ReportLevel.Info, "Code", "Open with the " + browser + " browser of "
                       + size.X + '×' + size.Y + " size " + url);
            BaseCodeCollection.KillCurrentBrowser(browser);
            var height = (short) size.X;
            var width = (short) size.Y;

            int processID = BaseCodeCollection.OpenBrowser(height, width, url, browser, isVerbose);

            DetermineOriginalVacancies();

            EnterSearchWords(); // HERE, A RepoInfoItem or something like that should be passed so that a mouse click is possible.

            AnalyzeSearchResultsMethod();

            CloseBrowser();
        }

    }
}

Angular4 Testing Karma - Error: Can't resolve all parameters for RequestOptions: (?)

This is my spec.ts file. I'm stuck with the error Error: Can't resolve all parameters for RequestOptions: (?). I have imported all the providers necessary also. Can anyone please help me resolve this error? Thanks in advance.

import { async, ComponentFixture, TestBed, inject } from '@angular/core/testing';
import { ResetPasswordComponent } from './reset-password.component';
import { ConfigService } from './../config-service.service';
import {Http, Headers, ConnectionBackend, RequestOptions} from '@angular/http';

describe('ResetPasswordComponent', () => {
  // let component: ResetPasswordComponent;
  // let fixture: ComponentFixture<ResetPasswordComponent>;

   beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [ResetPasswordComponent, ConfigService, Http, ConnectionBackend, RequestOptions]
    });
  });

  // beforeEach(async(() => {
  //   TestBed.configureTestingModule({
  //     declarations: [ ResetPasswordComponent ]
  //   })
  //   .compileComponents();
  // }));

  beforeEach(() => {
    fixture = TestBed.createComponent(ResetPasswordComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  // it('should create', () => {
  //   expect(component).toBeTruthy();
  // });

  it('should create', () => {
    expect('holaa').toBe('holaa');
  });

  it('Is Password Change Function Working', inject([ResetPasswordComponent], (reset:ResetPasswordComponent) => {
    expect(reset.simplyAFunction()).toBe(true);
  }));
});

Why is Castle Windsor trying to load dependencies from installers I'm excluding?

We have this ASP.NET Web API project which configures a Castle Windsor container on startup using a series of implementations of IWindsorInstaller.

The method in the API which does this looks like:

public static IWindsorContainer RegisterContainer(IWindsorInstaller settingsInstaller, InstallerFactory skipSettings,
    params Action[] onRegisterCallbacks)
{
    Container = new WindsorContainer();
    Container.AddFacility<StartableFacility>(x => x.DeferredStart());

    Container.Install(settingsInstaller); //Must be registered first!!!
    Container.Install(FromAssembly.InThisApplication(skipSettings));

    DependencyResolverService.Initialise(Container);

    onRegisterCallbacks
        .ToList()
        .ForEach(x => x());

    return Container;
}

I'm now trying to introduce a component test framework which uses Microsoft.Owin.Testing.TestServer to call the API but treat it as a black box, mocking any dependency which relies on external services.

I thought I'd be able to wire up my dependencies in my component test project by calling the above RegisterContainer method, but pass in an InstallerFactory that filtered out any installers with external dependencies. I'd register these later with mocks.

However, having done this I was getting System.IO.FileLoadException exceptions on the line Container.Install(FromAssembly.InThisApplication(skipSettings)) looking for a Microsoft.WindowsAzure.Storage assembly which is only required by one of the filtered installers.

So, trying to backtrack to see if I'd made a mistake, I changed my InstallerFactory to this:

public class SkipTestSettingsInstallerFactory : InstallerFactory
{
    public override IEnumerable<Type> Select(IEnumerable<Type> installerTypes)
    {
        return new List<Type>();
    }
}

By my understanding this should filter out all installers and thus cause the line Container.Install(FromAssembly.InThisApplication(skipSettings)) to do nothing. But no, same exception.

Now, of course I could reference Microsoft.WindowsAzure.Storage in my component test project, but I don't want to because to me that would undermine its purpose as an API black box. So what would be the best approach to essentially allowing two projects to share an IoC configuration whilst providing the scope to override specific registrations?

How can I test my server with an Australian web client?

I have an app running on a web server in the Heroku cloud. The server is in Ireland. The client is written using React.

Users from Australia (working with Windows/Firefox) report bugs that are seemingly caused by the long network delays (more than 500 milliseconds from Australia to Ireland!).

How can I rent a client computer in Australia so that I can connect to it via screen sharing, run a browser there and see for myself what happens when I use the app in Ireland?

Is there a cloud service in Australia that offers simple Windows boxes with remote desktop access?

dimanche 25 mars 2018

Grab a string to use in a .visit() call in cypress

I have a dom element that contains the string or a url that I would like to visit. I have labelled the dom element with a data attribute for easy reference.


Where it says 'Create Topic' in bold is the string, and in the console you can see it has a data-test="topicUrl" attribute.

I want to capture this string value so that I can visit the url at a later point.

I followed the docs on Variables and Aliases and tried

cy.get('[data-test="topicUrl"]').invoke('text').as('Url')

so that I could visit the page by using

cy.visit(this.Url)

But that doesn't work; it errors out with TypeError: Cannot read property 'Url' of undefined in the console.

How do I grab the text in a DOM element so that I can use it to visit a url at a later point?

Remove Column Name and Column Value with NULL Value

How can we omit columns whose value is NULL while fetching a single record using Hive?

Query used:

    hive> select * from table1 limit 2;
    Name  Value
    ABC   123
    XYZ   NULL

If I fetch data with the query select * from table1 where Name='ABC'; the output should be ABC 123:

    Name  Value
    ABC   123

but if I use the query select * from table1 where Name='XYZ' the output should be XYZ only, with only the Name header:

    Name
    XYZ

How to use mock objects in php testing

I'm trying to learn how to test properly and am struggling to get my head around mocks in the scenario below. I don't seem to be able to mock a class.

The main class uses a number of component classes to build a particular activity. I can test the component on its own and mock it correctly, but when I try to integration test it within the main class, it calls the real service, not the mock service.

This is in a Laravel 5.5 app.

I have a base class:

class booking {

    private $calEventCreator;

    public function __construct(CalenderEventCreator $calEventCreator) {
        $this->calEventCreator = $calEventCreator;
    }
}

This is then extended by another class:

class EventType extends booking {

    //do stuff
}

The CalenderEventCreator relies on an external service which I want to mock.

class CalendarEventCreator {

    public function  __construct(ExternalService $externalService) {

        $this->externalService = $externalService;

    }
}

In my test I have tried to do the following:

public function test_complete_golf_booking_is_created_no_ticket()
{

    $this->booking = \App::make(\App\Booking\EventType::class);

    $calendarMock = \Mockery::mock(ExternalService::class);

    $calendarMock->shouldReceive([
        'create' => 'return value 1',
    ])->once();

    $this->booking->handle($this->attributes, 'booking');

}

But in trying to execute the test, it's clear the ExternalService is not using the mocked object.

I have tried re-arranging the code as follows:

$calendarMock = \Mockery::mock(Event::class);
$calendarMock->shouldReceive([
    'create' => 'return value 1',
])->once();

$this->booking = \App::make(\App\Booking\EventType::class);

$this->booking->handle($this->attributes, 'booking');

and tried:

$this->booking = \App::make(\App\Booking\EventType::class, ['eventService'=>$calendarMock]);

But on each occasion the real service is called, not the mock version.

I'm learning this, so apologies for fundamental errors, but can someone explain how I should mock the external service correctly?

Could you please recommend me an open source code coverage tool that supports both PHP and JS?

Could you please recommend me an open source code coverage tool that supports both PHP and JS?

Intellij insists on using Android JUnit instead of Gradle for tests

I have a multi-module gradle project that includes some core backend modules, as well as different application modules for different platforms. For example, one android app module, and one module that just runs as a CLI.

I have configured gradle to run tests by going to Build, Execution, Deployment -> Gradle -> Runner and both checking the Delegate IDE build/run actions to gradle checkbox and selecting Gradle Test Runner from the Run tests using dropdown.

I have refreshed Gradle multiple times, as well as invalidating caches / restart. Despite this, when I click the run icon in the gutter for tests, even in the non-android modules, Intellij creates an Android JUnit configuration to run the test. This ends up not rebuilding after changes are made.

This did not used to happen, though I'm not sure when it started behaving this way or what changed recently.

How can I force IJ to always use gradle? I'd rather have no Android JUnit available, even in the android module, than the current behavior.

iOS - app crashes after update, clean install is OK - how to debug this?

I have a problem with one of my apps. I have released a major update that also changed the app bundle data and core functionality.

Users that install this app version as an update get a crash on startup. Clean installs seem to be fine (at least users with this problem say so; after deleting the app and doing a clean install, the problem is gone).

The Apple crash log is weird, because the stack trace is incorrect. It says that the crash is in method A, which is called from B. However, method A is never called from B in my code.

Is there a way to debug this? Can I somehow install (retrieve) the previous version of the app and test the update with the Xcode debugger running? I don't have the code for the previous version, so I cannot build and test it manually.

How to automatically set breakpoints on my code (java)?

During the process of testing, is there any tool or way to set breakpoints automatically?

Test Engineer Quality assurance

In manual testing, does the test engineer do smoke testing first or not? What are the abstraction methods used in an automation project? How do you execute the 1st test case if there are 100 test cases in automation? How do you compress the file if we have 3 files?

google maps road database

I am working on a small ScalaFX project where I'm using a WebView to show a Google map and draw lines and markers on it. To demonstrate my application, I would like to have a test database so I can simulate a car moving from point A to point B by road. So I need some database (or just a bunch of data; I can create a MySQL database myself) of road paths that can be used for this. Any help is appreciated, thank you!

What is the difference between use case testing and state transition testing? [on hold]

In black box testing, use case testing and state transition testing sound almost the same. Can anyone elaborate on the difference between these testing techniques? Thanks!

How to Connect & test polycom IP Phone with FreeSwitch V1.9

Steps here. Required:

  1. Freeswitch V1.9
  2. Intranet with at-least 3 available IP Ports
  3. Polycom IP Phone VVX411 Model (tested currently with, you can try with other Polycom phones)
  4. Android/iOS/Windows mobile Phone with free Lin SIP Phone installed and configured with Freeswitch

Rails 5.1/Devise 4.4 - Testing: undefined method `setup' for Object:Class (NoMethodError)

I am running into a "undefined method `setup' for Object:Class (NoMethodError)" when testing with Rails. I am getting this error when running any of my tests.

I am using the test setup from the Ruby on Rails Tutorial and in addition have included Devise::Test::IntegrationHelpers (for integration tests) and Devise::Test::ControllerHelpers (for controller tests) in my test files.

This is the error I am getting: .rbenv/versions/2.4.2/lib/ruby/gems/2.4.0/gems/devise-4.4.3/lib/devise/test/controller_helpers.rb:30:in `block in <module:ControllerHelpers>': undefined method `setup' for Object:Class (NoMethodError)

This is the line from the controller_helpers file the error refers to:

module Devise
   module Test

      module ControllerHelpers
        extend ActiveSupport::Concern

        included do
           setup :setup_controller_for_warden, :warden
        end
         .
         .
         .

These are my test gems:

group :test do
   gem 'rails-controller-testing', '1.0.2'
   gem 'minitest-reporters',       '1.1.14'
   gem 'guard',                    '2.13.0'
   gem 'guard-minitest',           '2.4.4'
end

This is my test_helper.rb file:

ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
require 'rails/test_help'
require "minitest/reporters"
Minitest::Reporters.use!

class ActiveSupport::TestCase
    fixtures :all 
end

This is one of my tests:

require 'test_helper'

class UsersControllerTest < ActionController::TestCase
  include Devise::Test::ControllerHelpers

 setup do
    @user = users(:one)
 end

 test "should get index" do
    get users_url
    assert_response :success
 end

 test "should get new" do
   get new_user_url
   assert_response :success
 end
  .
  .
  .
end

Any help with this issue would be appreciated. Thank you.

Integration tests in Angular application

I am a bit confused about this test:

  describe('#getUsers', () => {
    it('should return an Observable<User[]>', () => {
      const dummyUsers: User[] = [
        new User(0, 'John'),
        new User(1, 'Doe')
      ];

      service.getUsers().subscribe(users => {
        expect(users.length).toBe(2);
        expect(users).toEqual(dummyUsers);
      });

      const req = httpMock.expectOne(`${service.API_URL}/users`);
      expect(req.request.method).toBe('GET');
      req.flush(dummyUsers);
    });
  });

I saw the similar examples many times, during learning about tests in the Angular applications.

If I am seeing it right, we are declaring an array of Users and then returning the same array in the response to the request.

Finally we are checking whether the created array is the same as the one received. I can't understand the purpose; it looks really strange to me. What is the point of comparing the same array to the same array?

Shouldn't I make a real GET to the API and then check if there are elements in response?
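For illustration, here is the same pattern in framework-neutral terms, as a Python sketch using unittest.mock (`API_URL`, `get_users`, and `http_get` are hypothetical names, not part of the question): the point of such a test is not comparing an array with itself. Because the test controls the transport, it can pin down two things: that the service issues the right request, and that it hands the response through unchanged, all without depending on a live API.

```python
from unittest import mock

API_URL = 'https://example.test/api'  # hypothetical endpoint

def get_users(http_get):
    """Service under test: issues a GET and returns the parsed body."""
    return http_get(f'{API_URL}/users')

# The test supplies the transport, so it knows what the "server" returns.
dummy_users = [{'id': 0, 'name': 'John'}, {'id': 1, 'name': 'Doe'}]
fake_http = mock.Mock(return_value=dummy_users)

users = get_users(fake_http)

# The real assertions: request correctness and pass-through correctness.
fake_http.assert_called_once_with(f'{API_URL}/users')
assert users == dummy_users
```

A real GET against the API would be an end-to-end test with different trade-offs: it couples the test to a running server and to whatever data it happens to hold.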

JUnit with Kotlin - This class does not have a constructor

I'm trying to implement testing with JUnit 4 in my Kotlin application (as kotlin.test seems to be nonexistent in my Kotlin Runtime Library, and I don't know how to get it).
However, I am encountering an error simply by using the Test annotation.
That's my code:

import junit.framework.*

class IntTest {
    @Test    <------------- This line
    fun test1() {

    }
}

On the specified line, Eclipse gives an error message: "This class does not have a constructor".
I don't understand what is the problem. Which class doesn't have a constructor, and why should it have one?

Job interview preparation recommendations needed: QA Software Tester

Could you suggest materials (free online books, link references, ...) for interview preparation for the position QA Software Tester?

samedi 24 mars 2018

Naming conventions for unit tests with complex setups

This is possibly a more general question about unit testing. Sometimes I need to test a scenario with a lot of inputs. Let's say there are even just 4 inputs, like

public bool SomeMethod(bool foo, bool bar, bool baz, bool bux)
{
   // ... 
}

and I want to do

Assert.IsFalse(SomeThing.SomeMethod(true, false, false, true));

I end up with a test name like

[TestMethod]
public void TrueFooFalseBarFalseBazTrueBuxReturnsSuccessTest()

Am I doing it wrong?
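One common alternative to encoding every input in the method name is to keep one descriptively named test and let the data rows carry the detail. A sketch in Python's stdlib unittest, since the idea is language-agnostic (MSTest's [DataTestMethod]/[DataRow] is one C# analogue); `some_method` here is a hypothetical stand-in, not the questioner's method:

```python
import unittest

def some_method(foo, bar, baz, bux):
    # Hypothetical stand-in for the method under test.
    return bar and baz

class SomeMethodTest(unittest.TestCase):
    def test_some_method_flag_combinations(self):
        # (foo, bar, baz, bux) -> expected result; the data rows, not the
        # test name, document each combination.
        cases = [
            ((True, False, False, True), False),
            ((True, True, True, True), True),
            ((False, True, True, False), True),
        ]
        for args, expected in cases:
            with self.subTest(args=args):
                self.assertEqual(some_method(*args), expected)
```

Each failing row is reported individually by subTest, so one readably named method covers all combinations without names like TrueFooFalseBarFalseBazTrueBuxReturnsSuccessTest.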

How to test react-native android and iOS app using appium and selenium through java?

I am going to test a React Native Android & iOS mobile app using Appium and Java Selenium. Using Eclipse with a real Android device it is working fine, but I am unable to test the iOS app. Please help me out with this.

How do I know if the test is correct?

I have this doubt in mind. Suppose I want to create a program to make sure that my previous program sorts an array in increasing order correctly. By doing so, wouldn't I be directly recreating the program I wanted to write? So my doubt is: how can I make sure that my program tests whether the sort executed correctly? And if I know the test is correct, wouldn't I have already created the program I wanted to create in the first place?

I know it's strange and I hope you have understood the concept. It's like the question of who debugs the debugger.
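A common way out of this chicken-and-egg worry: the checker does not have to re-implement the sort, because verifying sortedness is a strictly simpler problem than sorting. It only needs two properties: the output is in non-decreasing order, and it is a permutation of the input. A plain-Python sketch (the sort under test is just the built-in here, as a stand-in):

```python
from collections import Counter

def is_correct_sort(original, result):
    """Check that result is a valid increasing sort of original,
    without sorting anything ourselves."""
    # 1. Every adjacent pair must be in non-decreasing order.
    ordered = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    # 2. result must be a permutation of original (same multiset of items).
    same_items = Counter(original) == Counter(result)
    return ordered and same_items

# The program under test could be anything, e.g. the built-in sorted():
data = [5, 3, 8, 1, 3]
assert is_correct_sort(data, sorted(data))          # a correct sort passes
assert not is_correct_sort(data, [1, 3, 3, 5])      # dropped an element
assert not is_correct_sort(data, [5, 3, 8, 1, 3])   # not ordered
```

Checking runs in O(n) while comparison sorting takes O(n log n), which is why writing the test does not amount to rewriting the sorter.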

JMeter load test - testing an ASP.NET login page in JMeter

I have to load test an ASP.NET web application login page. My scenario is: log in to the web application, enter a term in the search field on the home page, and click search.

After recording the script in JMeter, my samplers are: 1) a GET request (the login page), 2) a POST request (posting the credentials and clicking login), 3) other samplers (after successful login).

My problem is that it shows an incorrect username/password error whenever I play the script (manually it works). I have parameterized the valid credentials, and also did correlation (by looking at the POST request I got to know the fields being posted) of the event validation, viewstate generator, viewstate and hdnkey values from the GET response (sampler 1) to my POST request (sampler 2) and tried again, but I am getting the same error every time.

Please let me know what should be done to log in successfully, so I can perform the load test on this ASP.NET application. I have come across lots of sites for this issue but nothing solved it. Please help!