Saturday, September 30, 2017

codeception : Setting the "doctrine" pre-defined service is deprecated since Symfony 3.3 and won't be supported anymore in Symfony 4.0

My project is a Symfony 3.3.9 project with the Doctrine ORM. I use Codeception 2.3.6 with the Doctrine2 module, following this article: http://ift.tt/1EuuTA1

My Codeception config is:

#tests/functional.suite.yml
actor: FunctionalTester
modules:
    enabled:
        - \Helper\Functional
        - PhpBrowser:
            url: http://localhost 
        - Symfony 
        - Doctrine2:
            depends: Symfony
            cleanup: true

When I run the test suite with this command

./vendor/bin/codecept run functional

The tests pass successfully, but deprecation warnings are thrown:

Setting the "doctrine" pre-defined service is deprecated since Symfony 3.3 and won't be supported anymore in Symfony 4.0

When I remove the Doctrine2 module configuration from functional.suite.yml

#tests/functional.suite.yml
actor: FunctionalTester
modules:
    enabled:
        - \Helper\Functional
        - PhpBrowser:
            url: http://localhost 
        - Symfony 

I have to remove the calls to $I->grabEntityFromRepository() in my test classes, and the deprecation warning disappears.

Does dependency injection mean no inline initialization?

If I follow the dependency injection principle in a class design, I should make sure the class does not instantiate its dependencies itself, but instead asks for them through its constructor.

This way I am in control of the dependencies I provide to the class while unit-testing it. That part I understand.

What I am not sure about is: does a good class design that follows the dependency injection principle mean its fields should never be initialized inline? Should we avoid inline initialization entirely to produce testable code?
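To make the distinction concrete, here is a minimal sketch; the Engine and Car classes are invented purely for illustration:

```python
class Engine:
    def start(self):
        return "vroom"

# Inline initialization: the dependency is hard-wired inside the class,
# so a unit test cannot substitute a fake Engine.
class CarHardwired:
    def __init__(self):
        self.engine = Engine()

# Constructor injection: the caller supplies the dependency,
# so a test can pass in a stub or fake instead of the real thing.
class Car:
    def __init__(self, engine):
        self.engine = engine

    def drive(self):
        return self.engine.start()

# In a unit test, a trivial fake replaces the real Engine:
class FakeEngine:
    def start(self):
        return "fake vroom"

assert Car(FakeEngine()).drive() == "fake vroom"
```

The injected version leaves the production wiring (who builds the real Engine) to a composition root or factory, rather than to the class itself.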

Django test: Setting cookie (Django 1.11+)

I'm having trouble setting a cookie in a Django test.

class Test_views(TestCase):

    @classmethod
    def setUpClass(cls):
        pwd = '12345'
        cls.c = Client()
        cls.user_1 = UserFactory(password=pwd)
        cls.c.login(username=cls.user_1.username, password=pwd)
        super(Test_views, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(Test_views, cls).tearDownClass()

    def test_set_cookie(self):
        session = self.c.session
        session['mycookie'] = 'testcookie'
        session.save()
        response = self.c.get(reverse('homepage')) 
        ...

I print the cookies in the Views to be sure:

views.py

... 
def homepage(request):
        print(request.session.keys())
        ...

And indeed, mycookie doesn't exist in the session.

Apparently, this is supposed to be the right way to set a cookie:

http://ift.tt/2x4qx3K

Call function, which randomize parameters for test, within decorators

I am in the process of writing a variety of tests, and most of them look like this:

@pytest.mark.parametrize("name, price, count, type, countPoints, device", [
    ('test_name1', 3001, 1, 'key', 1, CAR),
    ('test_name2', 3000, 167, '', 1, MOTO),
    # and so on, choosing different parameters
])
def test_getItemTradePreferences(name, price, count, type, countPoints, device):
    assert testing_funtion(price, count, type, countPoints, device) == val_obj.getItemTradePreferences(name, price, count, type, countPoints)

Then I thought it would be better not to type the parameters manually, but to write a function that uses random to generate them for me:

def generate_testing_values():
    return ['test_nameX', randint(0, 3000), randint(1, 1000), '', randint(1, 1000), choice([CAR, MOTO])]

and call test like this:

@pytest.mark.parametrize("name, price, count, type, countPoints, device", [
    generate_testing_values(),
    generate_testing_values(),
    generate_testing_values(),
    # can I call generate_testing_values() in a loop?
])
def test_getItemTradePreferences(name, price, count, type, countPoints, device):
    assert testing_funtion(price, count, type, countPoints, device) == val_obj.getItemTradePreferences(name, price, count, type, countPoints)

But can I somehow call generate_testing_values() in a loop within the decorator? I haven't found a solution yet; if you know one, please share.
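For what it's worth, the argument to @pytest.mark.parametrize is an ordinary Python list evaluated at import time, so it can be built with a loop or comprehension. A minimal sketch (CAR and MOTO are hypothetical stand-ins for the real constants):

```python
from random import randint, choice

CAR, MOTO = 'car', 'moto'  # hypothetical placeholders for the real constants

def generate_testing_values():
    # Same shape as the hand-written tuples: (name, price, count, type, countPoints, device)
    return ('test_nameX', randint(0, 3000), randint(1, 1000), '', randint(1, 1000), choice([CAR, MOTO]))

# The decorator argument is just an expression, so a comprehension works:
params = [generate_testing_values() for _ in range(10)]

# @pytest.mark.parametrize("name, price, count, type, countPoints, device", params)
# def test_getItemTradePreferences(name, price, count, type, countPoints, device):
#     ...
```

One caveat: randomizing inputs this way makes failures hard to reproduce unless you seed the random module or log the generated values.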

Thank you!

Friday, September 29, 2017

How can I make BadBoy use Chrome as the default browser instead of Internet Explorer 8? [on hold]

I can't start running tests in this software because my application doesn't support Internet Explorer 8. I want to change the default browser that BadBoy uses, but I don't know how. Help!

How to click on an element based on its coordinates with selenium webdriver

So, the web app we're developing has a TV/PC mode, and I'm testing the ability to transition between these two modes.

def pc_to_tv(self):
    pc_to_tv = self.driver.find_element_by_xpath(
        'html/body/div[1]/div/topbar/header/div[2]/div[2]/div[1]/button[1]')
    pc_to_tv.click()

def tv_to_pc(self):
    tv_to_pc = self.driver.find_element_by_xpath(
        'html/body/div[1]/div/topbar/header/div[2]/div[2]/div[1]/button[2]')
    tv_to_pc.click()

The problem is, when I switch from PC to TV, the screen "zooms in", making the button appear in the same place it would be without the zoom. So I can't click the button with my tv_to_pc method, because instead of clicking the actual button, it clicks where the button should be.

So the solution I found was to click the button by its coordinates; that way I'll actually click the place I want, instead of clicking an unclickable spot as I was doing.

The thing is, I don't know how to do this, and I need help with it.

How to test internal promise in NodeJS

I am trying to test a service that calls another service, which I am trying to spy on to verify it has been called. The issue is that the test assertion is wrapped in a promise that is resolved inside the method and never returned, unlike in most tutorials. When using

stubResponse.rejects(rejectionMessage)().catch((err)

I can get the code to run, but the callback in the test executes before the call to the actual method, so the assertion sees false, yet the test is still green?!

When using stubResponse.rejects(rejectionMessage).catch((err) I get:

TypeError: stubResponse.rejects(...).catch is not a function

My question boils down to: how can I test a call to a stubbed/spied/mocked internal service inside a promise that is never exposed? This can't be a rare use case, but I can't find any relevant results.

Relevant code:

service.js

const httpService = require('./requestService');
const apiaiService = require('./apiaiService');

const processMessagePostback = function postBackMessage(event) {
  let senderIdPresent = typeof event !== 'undefined' && typeof event.sender !== 'undefined' && typeof event.sender.id !== 'undefined';
  let messagePresent = typeof event !== 'undefined' && typeof event.message !== 'undefined' && typeof event.message.text !== 'undefined';

  if(senderIdPresent && messagePresent) {
    let senderId = event.sender.id;
    let message = event.message.text;

    apiaiService.getApiAiResponse(senderId, message)
        .then((res) => {
            logger.debug('Sending conversational reply',{message: message,apiaiResponse: res});
            httpService.sendMessage(senderId, res);
        })
        .catch((err) => {
            logger.error('Api Ai error',{error: err});
        })
  } else {
    logInvalidEvent(event);
  }
};

service.spec.js

const expect = require('chai').expect;
const sinon = require('sinon');
const proxyquire = require('proxyquire');
const stubHttpService = {};
const stubApiAi = {};
const sinonLoggerSpy = {};
const postbackService = proxyquire('./postbackService',{'./requestService':stubHttpService, './apiaiService': stubApiAi, '../utils/logger':sinonLoggerSpy});

....
it('processMessagePostback should log error given api ai error', () => {
    event.sender = {id: 0};
    event.message = {text: 'TEST'};
    let rejectionMessage = "REJECTED_TEST";

    let stubResponse = sinon.stub();
    stubApiAi.getApiAiResponse = stubResponse;

    stubResponse.rejects(rejectionMessage)().catch((err) => {
        sinon.assert.calledOnce(sinonLoggerSpy.error);
    });

    postbackService.processMessagePostback(event);
});

Iperf3 Optimum Settings For Testing General Browsing Performance

Iperf3 is a great tool and is seen as the gold standard for testing network performance; however, its enormous scope of functionality makes it difficult to configure for specific use cases.

What are the best settings to configure, at both the client and server end, to most closely test general browsing performance, and specifically to offer a reasonably direct comparison to the major browser test sites such as speedtest.com?

SoapUI responses comparison and integrating with excel sheet

I have a requirement where:

  1. SoapUI should call two environments with the same input data
  2. Compare both outputs (the output data will be in either XML or JSON format)
  3. If there are any differences, they should be written to an Excel sheet

What approach, that is, what technologies, can we use to implement this? How best can we implement this requirement? Please share your thoughts and suggestions.

import returns undefined instead of react component while testing

I'm in the middle of creating tests for my app and have a problem running them in Jest. My code structure looks like this:

<pre>
./src/
├── actions
│   ├── departments.js
│   ├── departments.test.js
├── actionTypes
│   ├── departmentsTypes.js
├── components
│   ├── common
│   │   ├── Form
│   │   │   ├── Form.css
│   │   │   ├── FormElement
│   │   │   │   ├── FormElement.css
│   │   │   │   ├── FormElement.js
│   │   │   │   ├── FormElement.test.js
│   │   │   │   └── __snapshots__
│   │   │   ├── Form.js
│   │   │   ├── Form.test.js
│   │   │   ├── index.js
│   │   │   └── __snapshots__
│   │   │       └── Form.test.js.snap
│   │   ├── index.js
│   │   ├── SchemaFormFactory
│   │   │   ├── SchemaFormFactory.js
│   │   │   ├── SchemaFormFactory.test.js
│   │   │   └── __snapshots__
│   │   │       └── SchemaFormFactory.test.js.snap
│   │   └── TextInput
│   ├── DepartmentSelector
│   │   ├── DepartmentSelector.css
│   │   ├── DepartmentSelector.js
│   │   ├── DepartmentSelector.test.js
│   │   ├── index.js
│   │   └── __snapshots__
│   ├── index.js
│   ├── MainApp
│   │   ├── index.js
│   │   ├── MainApp.css
│   │   ├── MainApp.js
│   │   ├── MainApp.test.js
│   │   └── __snapshots__
├── containers
│   ├── DepartmentForm
│   │   └── DepartmentForm.js
│   ├── DepartmentsSearch
│   │   ├── DepartmentsSearch.js
│   │   ├── DepartmentsSearch.test.js
│   │   ├── index.js
│   │   └── __snapshots__
├── helpers
│   └── test-helper.js
├── index.js
├── reducers
│   ├── departments.js
│   ├── departments.test.js
</pre>

And I'm trying to run a test for the FormElement component (FormElement.test.js). Inside the test I have an import statement:

import DepartmentsSearch from '../../../../containers/DepartmentsSearch/DepartmentsSearch'

and my DepartmentsSearch is a container that uses connect from the react-redux library.

import DepartmentSelector from '../../components/DepartmentSelector/DepartmentSelector'
import {createDepartment} from '../../actions'

const mapStateToProps = (state) => {
  return {
    departments: state.departments
  }
}

export default connect(mapStateToProps, {createDepartment})(DepartmentSelector)

For some reason the DepartmentSelector import returns undefined instead of a React component (it's just a dumb component, not a container). The weirdest thing is that this happens only when running tests, not when running the code in the browser. I tried using top-level imports at first, but that failed as well: import {DepartmentSelector} from '../../components'

I have no other ideas why it might fail only while testing, and I'd be glad if someone could point me in the right direction.

[Vue warn]: Error in render function: "TypeError: Cannot read property 'theme' of undefined"

I am building an app in Vue.js using the Quasar framework. Automated testing is required in the project. The problem is that the Quasar documentation isn't very specific when it comes to configuring a testing environment. I have tried to install Karma and integrate it into the project, but an error keeps popping up whenever I try to $mount a Vue component:

[Vue warn]: Error in render function: "TypeError: Cannot read property 'theme' of undefined"

This error happens when I try karma start and run this simple test:

describe('Main.vue', () => {
  it('should render correct contents', () => {
    const vm = new Vue(Main).$mount()
    console.log(vm.$el)
    expect(vm.$el.querySelector('.hello h1').textContent).to.equal('Welcome to Your Vue.js App')
  })
})

My Main.vue component:

import Vue from 'vue'
import Quasar, * as All from 'quasar'
import router from './router'
import store from './store/index.js'

Quasar.start(() => {
  new Vue({
    el: '#app',
    router,
    store,
    render: h => h(require('./App'))
  })
})

Has anyone experienced this issue and knows how to solve it? The versions I am using are "quasar-framework": "^0.14.1", "vue": "~2.3.4" and "karma": "^1.4.1".

Testing Laravel Eloquent Relationship

I'd like to know the best approach to ensure that my Activity model belongs to a User.

Activity.php

...

public function user()
{
   return $this->belongsTo(User::class);
}

How can I write a test that ensures the Activity model has that relationship?

NancyFx testing: The type or namespace Bootstrapper cannot be found

I'm working with NancyFx in C#, but I only restarted working in C# last week and am probably missing a lot of concepts related to .NET and C#; I will study those this weekend.

Trying to create some tests for a Nancy module (I use NUnit and the NancyFx test helper classes), I have the problem that those tests have to use the path of the project under test (for example, I write the tests in TestProject and I'm testing SelfHostNancyProject). As a workaround, I manually copied the Views directory of SelfHostNancyProject into the Views directory of TestProject, but copying those directories/files around, even with an automatic post-build command, is not the best way.

I discovered this solution http://ift.tt/2xC4BRJ but after copying the code of the TestingRootPathProvider class into my project I get the error

The type or namespace Bootstrapper cannot be found

Searching around with Google, I can't find a solution. It seems to be something related to app initialization: What are Bootstrappers/Bootstrapping in C#

But in any case, I don't understand which import I have to add (if any) to resolve the problem, or maybe there is something else I'm missing.

NodeJS Sinon Spy not evaluating as called although it is

I am trying to test a service that calls another service, which I am spying on to verify it has been called. When stepping through with the debugger I can see that it has been called; I can see spy.called set to true with the proper call arguments, but when it returns, the assertion evaluates to false. I do not understand what I am doing wrong. Could anybody please help?

service.js

const httpService = require('./requestService');

const processGreetingPostback = function postBackGreeting(event) {
    httpService.getClientName(senderId)
            .then((resp) => {
                let bodyObject = JSON.parse(resp);
                let reply = `Hi ${bodyObject.first_name}. ${greetingMessage}`;
                logger.debug('Replying: ', {senderId: senderId, message: reply});
                httpService.sendMessage(senderId, reply);
            })
            .catch((err) => {
                logger.error('Error in getting user name', {error: err});
                httpService.sendMessage(senderId, greetingMessage);
            }); 
};

service.spec.js

const expect = require('chai').expect;
const sinon = require('sinon');
const proxyquire = require('proxyquire');
const stubHttpService = {};
const sinonLoggerSpy = {};
const postbackService = proxyquire('./postbackService',{'./requestService':stubHttpService,'../utils/logger':sinonLoggerSpy}); 
....
it('should call httpService without user name',()=>{
    event.sender = {id:1234};
    event.postback = {payload:"Greeting"};
    stubHttpService.getClientName = sinon.stub().rejects('no user');
    stubHttpService.sendMessage = sinon.spy();

    postbackService.processGreetingPostback(event);

    //neither chai nor sinon return true
    expect(stubHttpService.sendMessage.called).to.equal(true);
    sinon.assert.calledOnce(stubHttpService.sendMessage);
    expect(stubHttpService.sendMessage.getCall(0).args[0]).to.equal(1234);
});

PhpUnit and Symfony: Mock Service not working

I'm using Symfony 3.3 and PHPUnit 5.7, and I'm trying to mock a service for testing an API controller.

The controller:

class ApiTestManager extends BaseApiController
{
    public function getAction(): View
    {
        $response = $this->get('app.business.test_api')->getResponse();
        return $this->view($response);
    }
}

The test class:

class ApiTestManagerTest extends WebTestCase
{
    public function testApiCall()
    {
        $client = static::createClient();

        $service = $this->getMockBuilder(ApiTestManager::class)
            ->disableOriginalConstructor()
            ->setMethods(['getResponse'])
            ->getMock()
            ->expects($this->any())
            ->method('getResponse')
            ->will($this->returnValue(new Response()));

        $client->getContainer()->set('app.business.test_api', $service);
        $client->request('GET', 'de/api/v1/getResponse');

        $this->assertEquals(200, $client->getResponse()->getStatusCode());
    }
}

I've spent hours trying to find the mistake, but every time I execute this test it gives me the following error:

Error: Call to undefined method PHPUnit_Framework_MockObject_Builder_InvocationMocker::getResponse()

Can anyone tell me what's wrong with my code? Thanks :)

How to disable geolocation in Opera with C#?

Hello, world!

I need help! Selenium, C#, Opera 48. How do I disable geolocation in Opera when a test runs?

case browser_Opera:
    // path to OperaDriver
    OperaDriverService service = OperaDriverService.CreateDefaultService(@"C://Windows/");
    OperaOptions options = new OperaOptions();
    // path to my Opera browser
    options.BinaryLocation = @"C://Program Files/Opera/launcher.exe";

    // not working
    options.AddUserProfilePreference("Enable geolocation", false);
    options.AddLocalStatePreference("Enable geolocation", false);

    driver = new OperaDriver(service, options);

Thursday, September 28, 2017

Sequential test scenarios for Jest

I've been starting to use create-react-app for my React projects, and as a testing library it comes with Jest.

As part of my React apps, I like to create integration tests (as well as unit tests) as I find it useful to be able to check the happy paths in the app are working as expected. For example: Render (mount) a page using Enzyme, but only mock out the http calls (using Sinon) so that the full React/Redux flow is exercised at once.

Before using create-react-app, I've used Mocha for testing, and I found mocha-steps to be a great extension for integration tests, as it allows tests in a group to be executed in sequence, and handles stopping if a step fails without stopping the entire test run.

Question: Is there any way to get Jest to behave in a similar way? Specifically, I'd like to be able to specify a series of tests in a group (such as in a describe) and have them execute sequentially in order.

I've been looking through the Jest docs, and for any other libraries that extend it, but I've come up empty. For now, it feels like the only option is to have one large test, which makes me sad, and I'd prefer not to swap out Jest for Mocha if I can avoid it.

Thanks!

Testing servlets with spring beans and embedded jetty

What is the best way to set up embedded Jetty to use a Spring test context for unit tests? We are having problems testing legacy servlets which were sprinkled with Spring beans like this:

public class MyServlet extends HttpServlet {

  @Autowired
  private MachineDao machineDao;
  void init() {
     SpringBeanAutowiringSupport.processInjectionBasedOnServletContext(this, this.getServletContext());
  }
  ...

We are using embedded Jetty (v9.2) in a Spring test to start a server with the particular servlet under test.

@ClassRule
public static TestServer server = new TestServer(MyServlet.class);

The problem is that the Spring test context is not accessible in Jetty, so processInjectionBasedOnServletContext(...) fails.

We also want to test that the .jsp is rendered correctly, so just using new MyServlet() with the MockHttpServletRequest & MockHttpServletResponse approach is not really a viable option.

We are using Spring 4.11 (not MVC) with Java config & annotations.
Any help much appreciated!

How to increase goto timeout for page load?

tl;dr;

I need to increase this timeout:

COMMAND | Command `test` ended with an error after [32.585s]
COMMAND | BackstopExcpetion: Test #1 on undefined: GotoTimeoutError: goto() timeout


Consider following server code:

const http = require('http');

const hostname = '0.0.0.0';
const port = 3000;

http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/html');
  res.end(`<!DOCTYPE html>
    <title>Test Page</title>
    <script>console.log("Hello world!")</script>
    <h1>Hello world!</h1>
  `);
}).listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});

and BackstopJS configuration with 4 * 3 = 12 tests:

{
  "id": "backstop_default",
  "viewports": [
    { "label": "phone_portrait",     "width":  320,   "height":  480 },
    { "label": "phone_landscape",    "width":  480,   "height":  320 },
    { "label": "tablet_portrait",    "width":  768,   "height": 1024 },
    { "label": "tablet_landscape",   "width": 1024,   "height":  768 }
  ],
  "scenarios": [
    { "label": "Test #1",   "url": "http://localhost:3000/",   "selectors": ["body"] },
    { "label": "Test #2",   "url": "http://localhost:3000/",   "selectors": ["body"] },
    { "label": "Test #3",   "url": "http://localhost:3000/",   "selectors": ["body"] }
  ],
  "paths": {
    "bitmaps_reference": "backstop_data/bitmaps_reference",
    "bitmaps_test": "backstop_data/bitmaps_test",
    "engine_scripts": "backstop_data/engine_scripts",
    "html_report": "backstop_data/html_report",
    "ci_report": "backstop_data/ci_report"
  },
  "report": ["browser"],
  "engine": "chrome",
  "engineFlags": []
}

When I start backstop test, everything works fine:

D:\Temp\Supertemp\server-delay>backstop test
BackstopJS v3.0.22
Loading config:  D:\Temp\Supertemp\server-delay\backstop.json

COMMAND | Executing core for `test`
createBitmaps | Selcted 3 of 3 scenarios.
Starting Chromy: port:9222 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9223 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
Starting Chromy: port:9224 --disable-gpu,--force-device-scale-factor=1,--window-size=768,1024
Starting Chromy: port:9225 --disable-gpu,--force-device-scale-factor=1,--window-size=1024,768
Starting Chromy: port:9226 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9227 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
Starting Chromy: port:9228 --disable-gpu,--force-device-scale-factor=1,--window-size=768,1024
Starting Chromy: port:9229 --disable-gpu,--force-device-scale-factor=1,--window-size=1024,768
Starting Chromy: port:9230 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9231 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
9224 LOG >  Hello world!
9227 LOG >  Hello world!
9226 LOG >  Hello world!
9222 LOG >  Hello world!
9223 LOG >  Hello world!
9225 LOG >  Hello world!
9228 LOG >  Hello world!
9231 LOG >  Hello world!
9229 LOG >  Hello world!
9230 LOG >  Hello world!
Starting Chromy: port:9232 --disable-gpu,--force-device-scale-factor=1,--window-size=768,1024
Starting Chromy: port:9233 --disable-gpu,--force-device-scale-factor=1,--window-size=1024,768
9232 LOG >  Hello world!
9233 LOG >  Hello world!
      COMMAND | Executing core for `report`
      compare | OK: Test #1 backstop_default_Test_1_0_body_0_phone_portrait.png
      compare | OK: Test #1 backstop_default_Test_1_0_body_1_phone_landscape.png
      compare | OK: Test #1 backstop_default_Test_1_0_body_2_tablet_portrait.png
      compare | OK: Test #1 backstop_default_Test_1_0_body_3_tablet_landscape.png
      compare | OK: Test #2 backstop_default_Test_2_0_body_0_phone_portrait.png
      compare | OK: Test #2 backstop_default_Test_2_0_body_1_phone_landscape.png
      compare | OK: Test #2 backstop_default_Test_2_0_body_2_tablet_portrait.png
      compare | OK: Test #2 backstop_default_Test_2_0_body_3_tablet_landscape.png
      compare | OK: Test #3 backstop_default_Test_3_0_body_0_phone_portrait.png
      compare | OK: Test #3 backstop_default_Test_3_0_body_1_phone_landscape.png
      compare | OK: Test #3 backstop_default_Test_3_0_body_2_tablet_portrait.png
      compare | OK: Test #3 backstop_default_Test_3_0_body_3_tablet_landscape.png
       report | Test completed...
       report | 12 Passed
       report | 0 Failed
       report | Writing browser report
       report | Browser reported copied
       report | Copied configuration to: D:\Temp\Supertemp\server-delay\backstop_data\html_report\config.js
      COMMAND | Executing core for `openReport`
   openReport | Opening report.
      COMMAND | Command `openReport` sucessfully executed in [0.114s]
      COMMAND | Command `report` sucessfully executed in [0.182s]
      COMMAND | Command `test` sucessfully executed in [8.495s]

But now let's make server delay 40 seconds before sending a page:

const http = require('http');

const hostname = '0.0.0.0';
const port = 3000;

http.createServer((req, res) => {
  setTimeout(() => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end(`<!DOCTYPE html>
      <title>Test Page</title>
      <script>console.log("Hello world!")</script>
      <h1>Hello world!</h1>
    `);
  }, 40000);
}).listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});

and run backstop test:

D:\Temp\Supertemp\server-delay>backstop test
BackstopJS v3.0.22
Loading config:  D:\Temp\Supertemp\server-delay\backstop.json

COMMAND | Executing core for `test`
createBitmaps | Selcted 3 of 3 scenarios.
Starting Chromy: port:9222 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9223 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
Starting Chromy: port:9224 --disable-gpu,--force-device-scale-factor=1,--window-size=768,1024
Starting Chromy: port:9225 --disable-gpu,--force-device-scale-factor=1,--window-size=1024,768
Starting Chromy: port:9226 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9227 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
Starting Chromy: port:9228 --disable-gpu,--force-device-scale-factor=1,--window-size=768,1024
Starting Chromy: port:9229 --disable-gpu,--force-device-scale-factor=1,--window-size=1024,768
Starting Chromy: port:9230 --disable-gpu,--force-device-scale-factor=1,--window-size=320,480
Starting Chromy: port:9231 --disable-gpu,--force-device-scale-factor=1,--window-size=480,320
      COMMAND | Command `test` ended with an error after [32.585s]
      COMMAND | BackstopExcpetion: Test #1 on undefined: GotoTimeoutError: goto() timeout
9230 LOG >  Hello world!
9228 LOG >  Hello world!
9222 LOG >  Hello world!
9224 LOG >  Hello world!
9227 LOG >  Hello world!
9231 LOG >  Hello world!
9223 LOG >  Hello world!
9225 LOG >  Hello world!
9229 LOG >  Hello world!
9226 LOG >  Hello world!

As you see, there is an error

COMMAND | Command `test` ended with an error after [32.585s]
COMMAND | BackstopExcpetion: Test #1 on undefined: GotoTimeoutError: goto() timeout

but after that we can see the console.log executed on the page. Backstop then waits forever for something... Anyway, that's not the question.

I'm interested in an option to increase the 30-second timeout, as in some cases the dev server needs more time to serve the content. How can I configure it?

How to test a method that receives a mocked class name as a parameter

I have a method like this:

public function prePersist(LifecycleEventArgs $event)
{
    $em = $event->getEntityManager();
    $entity = $event->getObject();
    $metadata = $em->getClassMetadata(get_class($entity));

}

I've tested the getEntityManager and getObject methods, but now it is time to test the getClassMetadata method and each of its parameters; in this case, there is only one:

get_class($entity)

The above line returns a (random) mock class name:

Mock_ObjectManager_126b0394
Mock_ObjectManager_cc9f593f
Mock_ObjectManager_8e119a34

It never returns the real class name... and I want to check the first parameter passed when getClassMetadata is called.

$test = $this;
$this->em->expects($this->at(0))
         ->method('getClassMetadata')
         ->with(
             $this->callback(function ($arg) use ($test) {
                 $test->assertThat(
                     $arg,
                     $test->logicalAnd(
                         $test->equalTo('ObjectManager')
                     )
                 );
                 return true;
             })
         )
         ->willReturn($this->objectManager);

How can I test this?

Sphinx Doctest Fails Due to NameError

I have the following simple example that fails when I run make doctest.

def my_func():
    '''
    Dummy test function. Returns the number 5.
    .. doctest::

      >>> my_func()
      5
    '''
    return 5

The output is shown below.

Document: foo
-------------
**********************************************************************
File "foo.rst", line 7, in default
Failed example:
    my_func()
Exception raised:
    Traceback (most recent call last):
      File "C:\Apps\Anaconda2\lib\doctest.py", line 1315, in __run
        compileflags, 1) in test.globs
      File "<doctest default[0]>", line 1, in <module>
        my_func()
    NameError: name 'my_func' is not defined
**********************************************************************
1 items had failures:
   1 of   1 in default
1 tests in 1 items.
0 passed and 1 failed.
***Test Failed*** 1 failures.

Doctest summary
===============
    1 test
    1 failure in tests
    0 failures in setup code
    0 failures in cleanup code
build finished with problems, 10 warnings.
Testing of doctests in the sources finished, look at the results in _build\doctest\output.txt.

It looks like somehow Sphinx isn't recognizing that my_func() is a function in the module foo. I'm not sure how to resolve this. What has me particularly confused is that if I run python -m doctest -v foo.py, it passes.

Can anyone offer any insight here?
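For background, sphinx.ext.doctest runs each doctest group in a fresh namespace, so names from the documented module are not imported automatically (plain python -m doctest, by contrast, runs examples with the module's own globals, which is why it passes). A common remedy, sketched here under the assumption that the module is named foo as in the question, is to inject the import into every group via conf.py:

```python
# conf.py -- hypothetical sketch; requires sphinx.ext.doctest to be enabled
extensions = ['sphinx.ext.doctest']

# Code in doctest_global_setup runs before every doctest group,
# so my_func is in scope when the examples execute.
doctest_global_setup = '''
from foo import my_func
'''
```

A per-page alternative is a `.. testsetup:: *` directive in the .rst file containing the same import.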

Testing socket.io with mocha

io.sockets.on('connection', function (socket) {
     // getSessionID is parsing socket.request.headers.cookie for sid
     let sessionID = getSessionID(socket);            
});

I'm using socket.request.headers.cookie to get the session ID and then mapping that to the socket ID.

So my problem is that when I'm running Mocha tests I won't have any session or cookies set. I really don't want to modify my server to cater to the test, i.e. passing through a fake session ID as a query parameter.

I was thinking of using Selenium, but would that be overkill? Is there a simpler approach?

How can I target an input using Capybara?

This is part of my view ...

  <input type="image" id="clickAction" src="<%= @bookSurveyImage %>" alt="Take Survey"%>"/>
<% end %>

</body>
</html>

I want to be able to target that input so I am redirected to another URL. How can I do this? Please help.

Below are some of the things that I have tried.

describe "landings", :type => :feature do
  it "index: get all params" do
    visit '/sample_page?'

    expect(page).to have_no_content 'Sample text'
    expect(page).to have_no_content 'Example'
    expect(page).to have_no_content("Some Content")

    expect(page).to have_css("img", :maximum => 1)

    # expect(page).to have_css("input", :id => "sampleID")
    # get :survey_link
    # expect(response).to render_template(:survey_link)
    # find("#clickAction").click
    # input[@type='image']
    # click_input

    # visit 'https://url........'
  end
end

Sphinx Doctest Not Finding Tests

I'm trying to figure out how to get doctest working with Sphinx, but it doesn't seem to be finding my tests. I have the following simple example.

def my_func():
    '''
    Dummy test function. Returns the number 5.
    .. doctest::

    >>> my_func()
    5
    '''
    return 5

When I run make doctest, the output tells me that there were zero tests. I'm pretty sure I have things configured correctly because if I run make html and then go to my index.html file, I see the function my_func() included in the documentation.

Am I overlooking something simple here? Thanks for your help.
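One detail that commonly causes "zero tests" (an assumption, but consistent with the snippet): reST needs a blank line between the preceding text and the `.. doctest::` directive, and the example lines must be indented under the directive. A minimal sketch of the corrected docstring, checked below with the stdlib doctest machinery rather than Sphinx:

```python
import doctest

def my_func():
    """Dummy test function. Returns the number 5.

    .. doctest::

       >>> my_func()
       5
    """
    return 5

# Collect and run the docstring examples the way `python -m doctest` would.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
results = [
    runner.run(test)
    for test in finder.find(my_func, name="my_func", globs={"my_func": my_func})
]
```

With the blank line and the indentation in place, Sphinx's doctest builder should pick the example up as well.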

How can I get the argument(s) of a stub in sinon and use one of the arguments + other data for the return value of a particular stub call

What I am trying to achieve is to stub a call which will return a certain value. That return value consists of one of the passed parameters and a new value.

How can I grab the argument of a stub and use it to form the return value for a given stub call?

E.g.

mockDb.query.onCall(0).return(
   Tuple(this.args(0), "Some other data");
);

I know I can do this:

sinon.stub(obj, "hello", function (a) {
    return a;
});

However, this works on the entire stub and not on an individual stub call. Unfortunately, I am not able to provide a different stub for different calls, as I have just one object (the db stub).

how to unit test this.getView().setModel()

I have SAPUI5 app that I'm trying to unit test using QUnit.

The code I'm trying to test is the following.

myFunction: function(flag, obj){
    var myModel = new sap.ui.model.json.JSONModel();

    if(flag){
        myModel.loadData("filePath", null, false);
    }
    // ...
    obj.getView().setModel(myModel, "myModel");
}

This function is in a different file from my controller and is called from the controller like this:

...
previous logic sets flag = true;
...
dataConnector.myFunction(flag, this); // where 'this' is a reference to the main 'this' of the application

Here is my unit test snippet

QUnit.test("Valid NSN Input", function(assert) {
    // Arrange
    var validInputEvent = {
        getParameters: sinon.stub().returns({ value: '306018040' })
    };

    var getValue = sinon.stub().returns('306018040');
    var setValue = sinon.spy();
    var apply = sinon.stub();

    var controls = {
        dataConnectionFlag: true,

        nsnSearchInput: {
            getValue: getValue,
            setValue: setValue
        },
    };

    //Checking onFilterNSN when input has a valid value
    mainController.onSearch(validInputEvent, controls);

So to clarify everything above: I'm calling a function located in file.js from controller.js and passing in the 'this' reference so I can set my data model from that other file. Everything works fine when I run the application, but the unit test fails because it doesn't know what the 'this' reference is. Is there a way to make this test work?

Jest. Change default export of imported module

I have a file config.js which looks pretty much like this:

import publicConfig from './prod.keys';

const defaultConfig = {
   //...some default config
};

export default publicConfig || defaultConfig;

and prod.keys.js

// SOCIAL API KEYS FOR PRODUCTION
export default null;

Now I wonder: can I test that, if someone adds a configuration to prod.keys.js as the default export, config.js will return that config instead of the default one?

Or maybe I shouldn't be testing things like this?

Adding reference to Nunit Test project - "operation could not be completed" error

I use VS 2017. I have created an NUnit test project, and I want to add a reference to my main project, whose methods I want to test. When I try "Add > Reference", an error occurs with the dialog:

The operation could not be completed.

I tried checking the NUnit versions and reinstalling it, but it didn't help. Any ideas?

Failed: Template parse errors: Angular 2

I'm getting the following error when running unit tests with the Angular CLI. The error concerns a form.

Failed: Template parse errors:
There is no directive with "exportAs" set to "ngForm" ("<form [ERROR ->]#userForm = "ngForm" (ngSubmit)="onSubmit(userForm.value)">
    <div class="form-group">
        <label for="e")

The html form

<form #userForm = "ngForm" (ngSubmit)="onSubmit(userForm.value)">
<div class="form-group">
        <label for="exampleInputCountry">Country</label>
        <input class="form-control" id="country" placeholder="Country" ng name="country" ngModel>
    </div>
    <button type="submit" class="btn btn-primary">Submit</button>
</form>

app.module.ts

@NgModule({
  declarations: [
  AppComponent,
  [...]
  ],
  imports: [
  FormsModule,
  [...] 
  ],
  providers: [],
  bootstrap: [AppComponent]
})

Initializing an Elasticsearch DB via file

I have an instance of elasticsearch running inside a Docker-container.

For testing purposes I would like to initialize the DB with a set of data, similar to Postgres, where one can write .sql files that are executed on container startup. What is the best way to accomplish this with Elasticsearch?

Visual Studio Live Testing - Start failed because TestPlatform V2 can not be located

No matter what I do, I can't seem to get Live Testing to start in Visual Studio 2017 (15.3.5).

I get this error: Start failed because TestPlatform V2 can not be located. Please try to modify your Visual Studio installation through Microsoft Visual Studio Installer, and make sure Testing tools core features is selected under Individual components -> Debugging and testing [13:23:22.396 Info] For more information, see http://ift.tt/2wlTCbm.

I have installed the required component and it still won't start, although the tests run fine if I run them manually.

Any help is appreciated.

Smoke Testing Web API for invalid arguments / omission with Postman

I'm trying to automate tests for a Swagger-defined Web API project. Firstly, I'd like to test for 5xx errors caused by invalid requests or the handling of arguments passed to the controller.
I have this in my single non automated test for a single endpoint:

tests["status code not 500"] = responseCode.code != 500;

But I'm not sure whether there's a way to automate/generate all the URL parameter combinations from the Swagger definition, or whether this must be done by hand for every method in the API.
I'm aware of variables/environments, but I don't know whether there's a way to make multiple test passes against a single endpoint with different variable values, or at least null/undefine them for passes after the first.
I can't use npm, so Newman is not an option for me.

Testing rationale between API and Service layer

I've been mulling over my testing approach for API development and was hoping to get some clarity about what to test and what not to test within an API > service layer relationship.

The service layer specs cover general functionality, asserting that if you do x it will do y.

The ambiguity for me lies in what to test for at the API level. If I'm abdicating responsibility for the functionality to the service layer, do I need to assert that functionality again in the API specs?

I can't decide if it is redundant or not.

Can you feel confident as long as the service layer specs pass that the API is functioning correctly or should I assert the functionality again in the API specs?

Should I assert that the API is calling the service correctly to compensate?

I appreciate this is a bit vague but was hoping to get some idea of how others approach this situation.

Testing reading list from yaml file into list object

I'm trying to load a list from application-test.yaml into a class, but it doesn't work.

This is what my application-test.yaml looks like:

category-merchandiser:
-
  username: test
  password: blas3cret
  roles:
    - test
    - admin

And this is my class:

@Configuration
@ConfigurationProperties(prefix = "category-merchandiser")
public class CategoryMerchandiserUsersConfig {

    private List<CategoryMerchandiser> users = new ArrayList<>();
    ** getters & setters **

    public static class CategoryMerchandiser {

        private String username;
        private String password;
        private List<String> roles = new ArrayList<>();

    ** getters & setters **

This is how I'm trying to test whether the size of my list is 2:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = CategoryMerchandiserUsersConfig.class)
public class CategoryMerchandiserUsersTest {

    @Autowired
    CategoryMerchandiserUsersConfig config;

    @Test
    public void shouldLoadApplicationYamlToClass() {
        assertThat(config.getUsers().size(), is(2));
    }
}

My console output tells me that the list is empty:

java.lang.AssertionError: 
Expected: is <2>
     but: was <0>
Expected :is <2>

Actual   :<0>

What am I doing wrong? I hope you can help me.
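A guess based on the snippet: `@ConfigurationProperties` binds by field name, and the field here is called `users`, so the list in YAML has to sit under a `users` key rather than directly under the prefix. A sketch of the shape that should bind (values copied from the question; note it defines only one user, so the assertion would expect size 1 unless a second entry is added):

```yaml
category-merchandiser:
  users:
    - username: test
      password: blas3cret
      roles:
        - test
        - admin
```

Separately, a plain `@ContextConfiguration` does not process `application-test.yaml` by itself; something like `@SpringBootTest` or adding `ConfigFileApplicationContextInitializer` is usually needed for the file to be read at all (another assumption worth verifying).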

@Configuration not getting loaded when in test package

Introduction

I have a repository PermissionRep that collects Permission beans (defined by other services) and uses it as resource.

An example service would define its Permission beans to allow access this way:

@Configuration
public class SomePermissions extends Permissions {

    private static final String BASE = "some";
    public static final String SOME_ACCESS = join(BASE, "access");

    @Bean
    Permission getAccessPermission() {
        return new Permission(SOME_ACCESS);
    }
}

Problem

Now I would like to test my repository PermissionRep. As there is no other service available during the test, there will be no Permission beans. Therefore I would like to use a TestPermissions configuration class like:

@Configuration
public class TestPermissions extends Permissions {

    private static final String BASE = "test";
    public static final String TEST_READ = join(BASE, READ);
    public static final String TEST_WRITE = join(BASE, WRITE);

    @Bean
    Permission getReadPermission() {
        return new Permission(TEST_READ);
    }

    @Bean
    Permission getWritePermission() {
        return new Permission(TEST_WRITE);
    }
}

I created this class in my test package, but Spring won't load it. If I point to the configuration class with @ContextConfiguration(classes = TestPermissions.class), the defined beans are initialized, but then the PermissionRep bean that I want to test is not created.

How can I add a @Configuration class for testing purposes to my test environment, without messing with the default configurations?

the loop does not iterate on factory boy class

I'm using factory_boy for testing my user (Customer). I have created the class UserFactoryCustomer for my customer User.

# factories.py
class UserFactoryCustomer(factory.django.DjangoModelFactory):

    class Meta:
        model = User
        django_get_or_create = ('first_name', 'last_name', 'username', 'email', 'user_type', 'balance')

    first_name = 'Ahmed'
    last_name = 'Asadov'
    username = factory.LazyAttribute(lambda o: slugify(o.first_name + '.' + o.last_name))
    email = factory.LazyAttribute(lambda a: '{0}.{1}@example.com'.format(a.first_name, a.last_name).lower())
    user_type = 1
    balance = 10000.00

# test_serializers.py
class ApiSerilizerTestCase(TestCase):
    def test_get_status_code_200(self):
        customers = factories.UserFactoryExecutor.objects.all()
        for customer in customers:
            self.assertEqual(customer.get('username').status_code, 200)

I'm getting this error:

Creating test database for alias 'default'...
System check identified no issues (0 silenced).
..E
======================================================================
ERROR: test_get_status_code_200 (tests.test_serializers.ApiSerilizerTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/heartprogrammer/Desktop/freelance-with-api/freelance/tests/test_serializers.py", line 20, in test_get_status_code_200
    customers = factories.UserFactoryExecutor.objects.all()
AttributeError: type object 'UserFactoryExecutor' has no attribute 'objects'

----------------------------------------------------------------------
Ran 3 tests in 0.246s

FAILED (errors=1)

I want to fetch all the customers created by UserFactoryCustomer so I can test them.
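For what it illustrates (a stdlib-only sketch with hypothetical names, no Django or factory_boy imports): `.objects` is a manager that lives on the model class, not on the factory, which is why `UserFactoryExecutor.objects` raises AttributeError. The factory's job is to create instances; the manager is then queried for them:

```python
class FakeManager:
    """Stand-in for a Django model manager."""
    def __init__(self):
        self._rows = []

    def create(self, **fields):
        self._rows.append(fields)
        return fields

    def all(self):
        return list(self._rows)


class User:
    objects = FakeManager()  # the .objects manager hangs off the model...


class UserFactory:
    """...while the factory only knows how to build model instances."""
    @staticmethod
    def create_batch(size):
        return [User.objects.create(username="user%d" % i) for i in range(size)]


UserFactory.create_batch(3)
customers = User.objects.all()  # query the model, not the factory
```

With the real libraries the same split would read `UserFactoryCustomer.create_batch(n)` followed by `User.objects.all()` inside the test.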

javascriptexecutor not working with angularjs forms

I am trying to automate a login form, built in AngularJS, with Selenium WebDriver's JavascriptExecutor. The script enters data in the email textbox, but when the submit button is clicked an error message says the textbox is not filled. I have also tried firing events like onkeyup and blur, but errors show these functions are not accepted. Textbox validation works fine with sendKeys(), though it is slow; the issue only appears when JavascriptExecutor is used. The HTML code:

<input type="text" ng-keypress="logindata($event)" ng-model="email" placeholder="Email ID" value="" name="email" class="form-control ng-pristine ng-valid ng-touched" id="email">

selenium-java code:

JavascriptExecutor executor = (JavascriptExecutor) driver; 
WebElement emailElement = driver.findElement(By.id("email"));
executor.executeScript("arguments[0].value='tester'", emailElement);

mercredi 27 septembre 2017

Javascript React-Native: Icon, UI, AccessibilityLabel

In our react-native project we have:

<Item>
              <Icon name="user" />
              <Input accessibilityLabel={'username'} placeholder="Username" .../>
</Item>

The problem is I need to use the accessibilityLabel for e2e testing using Appium and Cucumber.js

The icon produces the following in Accessibility Inspector.


Without using Xpath, is there any way to get around this?

react-native-cli: 2.0.1 node: v8.4.0 appium: v1.7.1

Cannot read property 'use_env_variable' of undefined

I am using http://ift.tt/1bLYyEM to learn how to use database migrations. This works fine.

As a next step, I would like to be able to run tests. My test is very simple (not using anything related to Sequelize) and looks like

import request from 'supertest';
import app from '../src/app.js';

describe('GET /', () => {
  it('should render properly', async () => {
    await request(app).get('/').expect(200);
  });
});

describe('GET /404', () => {
  it('should return 404 for non-existent URLs', async () => {
    await request(app).get('/404').expect(404);
    await request(app).get('/notfound').expect(404);
  });
});

When I run this as npm test, I get error as

➜  contactz git:(master) ✗ npm test

> express-babel@1.0.0 test /Users/harit/bl/sources/webs/q2/contactz
> jest

 FAIL  test/routes.test.js
  ● Test suite failed to run

    TypeError: Cannot read property 'use_env_variable' of undefined

      at Object.<anonymous> (db/models/index.js:11:11)
      at Object.<anonymous> (src/routes.js:3:14)
      at Object.<anonymous> (src/app.js:4:15)
      at Object.<anonymous> (test/routes.test.js:2:12)
          at Generator.next (<anonymous>)
          at Promise (<anonymous>)

Test Suites: 1 failed, 1 total
Tests:       0 total
Snapshots:   0 total
Time:        0.702s
Ran all test suites.
npm ERR! Test failed.  See above for more details.
➜  contactz git:(master) ✗

I am new to Node.js and Express, so I'm not sure what is going wrong here.
The code is available at http://ift.tt/2xMZC0w

Could someone please help me understand what's wrong and how to fix it?

Angular 4 Jasmine Actual is not a Function

I'm doing expect(ClassUnderTest.someMethod(withSomeParams)).toThrow() and I'm getting:

Error: <toThrow> : Actual is not a Function
Usage: expect(function() {<expectation>}).toThrow(<ErrorConstructor>, <message>)

and I don't understand the usage example.

I tried expect(() => ClassUnderTest.someMethod(withSomeParams)).toThrow() and got "Expected function to throw an exception.". I also tried:

ClassUnderTest.someMethod(withSomeParams)
              .subscribe( res => res,
                          err => console.log(err))

and I got Error: 1 periodic timer(s) still in the queue.

I don't understand how to write this expectation for when the error is thrown.

Instrumented Unit Class Test - Can't create handler inside thread that has not called Looper.prepare()

Sorry, I've looked at tutorials all over the place and not found the answer I'm looking for. I'm following Google's tutorial at the moment, here: http://ift.tt/1L3U2Cy

I'm trying to create an Instrumented test and when I run it I'm getting the error : java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()

So my test is as follows:

package testapp.silencertestapp;

import android.support.test.filters.SmallTest;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

@RunWith(AndroidJUnit4.class)
@SmallTest
public class MainActivityTest {

    private MainActivity testMain;

    @Before
    public void createActivity(){
        testMain = new MainActivity();
    }

    @Test
    public void checkFancyStuff(){
        String time = testMain.createFancyTime(735);

        assertThat(time, is("07:35"));
    }
}

And the method I'm trying to run inside the main activity is as follows (this is an extract):

public class MainActivity extends AppCompatActivity {

    private TimePicker start;
    private TimePicker end;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        start = (TimePicker) findViewById(R.id.startPicker);
        end = (TimePicker) findViewById(R.id.endPicker);
    }

 String createFancyTime(int combinedTime) {

        StringBuilder tempString = new StringBuilder(Integer.toString(combinedTime));
        if(tempString.length()==4){
            tempString = tempString.insert(2, ":");
        }
        else if (tempString.length()==3){
            tempString = tempString.insert(1, ":");
            tempString = tempString.insert(0, "0");
        }
        else if(tempString.length()==2){
            tempString = tempString.insert(0, "00:");
        }
        else if(tempString.length()==1){
            tempString = tempString.insert(0, "00:0");
        }
        return tempString.toString();
    }

I presume this is an issue because I'm not starting things up correctly, or something similar. I've tried several approaches, but I just get errors everywhere. Searching on here, this error is popular, but not in relation to testing, so I wondered if anyone could kindly point me in the right direction so I can test the methods in this class?

Run Jest test in IntelliJ IDEA

I created a React app with create-react-app version 1.4.0 and opened the resulting project in IntelliJ. Now I am attempting to run the generated test in IntelliJ as well. I get the following output when I do so:

/usr/bin/node /home/l/src/hello-react/node_modules/jest/bin/jest --config {\"rootDir\":\"/home/l/src/hello-react\",\"transformIgnorePatterns\":[\"/node_modules/\",\"^/home/l/bin/idea-IU-172.3544.35/plugins/JavaScriptLanguage/helpers\"],\"unmockedModulePathPatterns\":[\"^/home/l/bin/idea-IU-172.3544.35/plugins/JavaScriptLanguage/helpers\"]} --colors --setupTestFrameworkScriptFile /home/l/bin/idea-IU-172.3544.35/plugins/JavaScriptLanguage/helpers/jest-intellij/lib/jest-intellij-jasmine.js --testPathPattern ^/home/l/src/hello\-react/src/App\.test\.js$ --testNamePattern "^(test )?renders without crashing$"
 FAIL  src/App.test.js
  ● Test suite failed to run

    /home/l/src/hello-react/src/App.test.js: Unexpected token (7:18)
        5 | it('renders without crashing', () => {
        6 |   const div = document.createElement('div');
      > 7 |   ReactDOM.render(<App />, div);
          |                   ^
        8 | });
        9 | 

Test Suites: 1 failed, 1 total
Tests:       0 total
Snapshots:   0 total
Time:        2.73s
Ran all test suites matching /^/home/l/src/hello\-react/src/App\.test\.js$/ with tests matching "^(test )?renders without crashing$".

Process finished with exit code 1
Empty test suite.

I am only running the generated code. I have not modified it or written any of my own code. What do I need to do in order to be able to run this test in IntelliJ IDEA?

For reference, here are the important files:

App.test.js

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<App />, div);
});

App.js

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <div className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <h2>Welcome to React</h2>
        </div>
        <p className="App-intro">
          To get started, edit <code>src/App.js</code> and save to reload.
        </p>
      </div>
    );
  }
}

export default App;

package.json

{
  "name": "hello-react",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "react": "^16.0.0",
    "react-dom": "^16.0.0",
    "react-scripts": "1.0.13"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}

How do you define npm test to run cucumber cross platform?

When I was trying out Jasmine, I could simply add the following to my package.json file:

"scripts": {
    "test": "jasmine"
},

Then when I ran npm test from a Windows command prompt, npm would run my Jasmine tests.

However, with Cucumber-JS, I tried "test": "cucumber" but that simply opened node_modules\.bin\cucumber.js in my editor. I seem to have to use the following to actually run the tests:

"scripts": {
    "test": "cucumber.js.cmd"
},

That seems very platform-specific, and I'd rather not make this difficult for co-developers using Mac or Linux.

Is there a value I can use for the "test" command that would work on multiple operating systems?
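For what it's worth (an assumption about the package layout at the time): the cucumber npm package exposes a `cucumber-js` bin entry, and `npm run` prepends `node_modules/.bin` to PATH on every platform, generating `.cmd` shims on Windows automatically. So naming the bin script, with no extension, should stay cross-platform:

```json
{
  "scripts": {
    "test": "cucumber-js"
  }
}
```

The bare `"cucumber"` likely opened the file in an editor because Windows dispatched the `.js` file to its associated application instead of running it; referring to the shim avoids that.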

Django's TestCase doesn't seem to clean up database between tests

I have written a test to check the integrity of the data I keep in my fixtures. It is a class inheriting from django.test.TestCase, and it declares the fixtures it needs. When I run the methods in this class only, they pass. However, when I run all of my tests, some fixtures from other tests remain in the DB, which makes the tests fail. I tried different variants, and for now I am overriding the _fixture_setup method to wipe all DB data before my test, but this can't be right. There must be a better way :)

class FixturesTest(TestCase):
    fixtures = ["tariff.json"]

    def _fixture_setup(self):
        TariffModel.objects.all().delete()
        super()._fixture_setup()

    def test_tariff_fixtures(self):
        """Check the fixtures for tariffs"""
        ...

Thanks.

Debugging (diagnostic) of graphics pipeline for compiled program

Sorry for the noob question, but is there a way to debug (or more correctly, diagnose) a compiled program's use of the video card and graphics pipeline: how much video memory the program uses, the size of its buffers (VBO, IBO, FBO), its GPU load, etc.? I have GPU Caps Viewer, but I suspect there are more detailed tools.

Using Behat Predefined steps, i want to select value from this dropdown

I am trying to select the third option from a dropdown (shown in the image below). I want to do it using Behat/Mink predefined steps. Any suggestions?

Test post, please ignore this

Hi, I'm just testing out how questions work on this site. Please ignore this post.


Testing aiohttp client with unittest.mock.patch

I've written a simple HTTP client using aiohttp and I'm trying to test it by patching aiohttp.ClientSession and aiohttp.ClientResponse. However, it appears as though the unittest.mock.patch decorator is not respecting my asynchronous code. At a guess, I would say it's some kind of namespacing mismatch.

Here's a minimal example:

from aiohttp import ClientSession

async def is_ok(url:str) -> bool:
    async with ClientSession() as session:
        async with session.request("GET", url) as response:
            return (response.status == 200)

I'm using an asynchronous decorator for testing, as described in this answer. So here's my attempted test:

import unittest
from unittest.mock import MagicMock, patch

from aiohttp import ClientResponse

from my.original.module import is_ok

class TestClient(unittest.TestCase):
    @async_test
    @patch("my.original.module.ClientSession", spec=True)
    async def test_client(self, mock_client):
        mock_response = MagicMock(spec=ClientResponse)
        mock_response.status = 200

        async def _mock_request(*args, **kwargs):
            return mock_response

        mock_client.request = mock_response

        status = await is_ok("foo")
        self.assertTrue(status)

My is_ok coroutine works fine when it's used in, say, __main__, but when I run the test, it gives me an error that indicates that the session.request function has not been mocked per my patch call. (Specifically it says "Could not parse hostname from URL 'foo'", which it should if it weren't mocked.)

I am unable to escape this behaviour. I have tried:

  • Importing is_ok after the mocking is done.
  • Various combinations of assigning mocks to mock_client and mock_client.__aenter__, setting mock_client.request to MagicMock(return_value=mock_response), or using mock_client().request, etc.
  • Writing a mock ClientSession with specific __aenter__ and __aexit__ methods and using it in the new argument to patch.

None of these appear to make a difference. If I put assertions into is_ok to test that ClientSession is an instance of MagicMock, then those assertions fail when I run the test (as, again, they would when the code is not patched). That leads me to my namespace-mismatch theory: that is, the event loop is running in a different namespace from the one patch is targeting.

Either that, or I'm doing something stupid!
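Separate from where the patch targets, the mock itself has to behave like two nested async context managers: `ClientSession()` is entered with `async with`, and so is `session.request(...)`. A stdlib-only sketch of the shape the stub needs (hypothetical classes; in a real test these would be wired onto the patched `ClientSession` via `return_value` and `__aenter__`):

```python
import asyncio

class FakeResponse:
    status = 200

class FakeRequestContext:
    # session.request(...) must return an async context manager,
    # not a coroutine and not a plain MagicMock
    async def __aenter__(self):
        return FakeResponse()
    async def __aexit__(self, *exc):
        return False

class FakeSession:
    def request(self, method, url):
        return FakeRequestContext()
    # ClientSession() itself is also used with `async with`
    async def __aenter__(self):
        return self
    async def __aexit__(self, *exc):
        return False

async def is_ok(url, session_factory=FakeSession):
    # same body as the question's coroutine, with the session injected
    async with session_factory() as session:
        async with session.request("GET", url) as response:
            return response.status == 200

result = asyncio.run(is_ok("http://example.invalid"))
```

If a patched MagicMock is used instead, both `__aenter__` hops have to be configured explicitly, which is usually where these tests silently fall back to the real class.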

Run tests with the Maven Surefire plugin after wildfly:deploy execution

How can I run my tests with the Surefire plugin after the remote WildFly deployment?

This is my pom.xml file:

<build>
    <finalName>${project.artifactId}</finalName>
        <pluginManagement>
            <plugins>

                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.7.0</version>
                    <configuration>
                        <source>1.8</source>
                        <target>1.8</target>
                    </configuration>
                </plugin>

                <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <version>1.0.2.Final</version>
                        <configuration>
                            <hostname>myip</hostname>
                            <port>myport</port>
                            <username>myusername</username>
                            <password>mypassword</password>
                        </configuration>
                </plugin>

            </plugins>
        </pluginManagement>         
</build>

Maybe by overriding the Surefire plugin to run in the integration-test phase? How can I do that?
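One conventional route (a sketch, not a drop-in pom; versions are illustrative): Surefire is bound to the `test` phase, which runs before packaging and deployment, so post-deploy tests usually go to the Failsafe plugin instead. Its `integration-test` goal runs after the package is built, and `wildfly:deploy` can be bound to `pre-integration-test`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.20.1</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <version>1.0.2.Final</version>
  <executions>
    <execution>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Note also that plugins declared only under `<pluginManagement>` are not bound to the lifecycle; the executions need to sit under `<build><plugins>`. Failsafe picks up test classes named `*IT.java` by default.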

PAL reports generate empty htm file

I'm looking for some help in this matter urgently.

I recorded the performance of my servers with Performance Monitor during exactly one week and the (.blg) files that were generated have approximately 11 GB each one.

After that, I tried to use PAL to generate some reports in order to use some of the charts. However, PAL only generates a .htm file (not a .html file) with a size of 0; the generated file is empty.

In the settings I specifically chose to restrict the analysis of the .blg file to one day. Any ideas on how to resolve this issue?


Thanks!

creating test classes or units for classes in eclipse

I have a very large application that I do not know much about. I think I have identified the classes I need to work with, but I'm not sure. I cannot just click and run it as a Java application, as there are so many components. I would like to create a test for the class I think I need to reference, but I don't know much about unit testing. How would I write a test method, for example, for the following constructor?

public AudioFetcher(NetworkConnctor eRid, String eventID) throws ServerException {
    eKid = eRid;
    this.eventID = eventID;
    eof = false;
    calculateSize();
}

Will this even work? If not, how do I identify the functionality of the different classes with no documentation or previous developers available?

Capture properties of interactive elements on a executing program

In graphical user interface (GUI) testing for desktop or web applications, record-and-replay tools can capture the interactive elements of a running program. Tools like Ranorex (http://ift.tt/2fxQOoC) or Selenium (http://ift.tt/zDsxQY) can capture most of the elements you click, select, etc.

For example, you start your program, then you click a button "OK". As a result, Ranorex/Selenium detect the exact type of this button, its content ("OK"), its location on the screen, etc.

How do these tools capture these elements? Any insights based on your knowledge, or any scientific papers on this question, are appreciated.

R ggplot2: boxplots with significance level (more than 2 groups: kruskal.test and wilcox.test pairwise) and multiple facets

Following up on this question, I am trying to make boxplots and pairwise comparisons to show levels of significance (only for the significant pairs) again, but this time I have more than 2 groups to compare and more complicated facets.

I am going to use the iris dataset here for illustration purposes. Check the MWE below where I add an additional "treatment" variable.

library(reshape2)
library(ggplot2)
data(iris)
iris$treatment <- rep(c("A","B"), length(iris$Species)/2)
mydf <- melt(iris, measure.vars=names(iris)[1:4])
ggplot(mydf, aes(x=variable, y=value, fill=Species)) + geom_boxplot() +
stat_summary(fun.y=mean, geom="point", shape=5, size=4) +
facet_grid(treatment~Species, scales="free", space="free_x") +
theme(axis.text.x = element_text(angle=45, hjust=1)) 

This produces the following plot:

plot

The idea would be to perform a Kruskal-Wallis test across the "variable" groups (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width), and pairwise Wilcoxon tests between them, PER FACET defined by "Species" and "treatment".

It would most likely involve updating the annotation like in my previous question.

In other words, I want to do the same as in this other question I posted, but PER FACET.

I am getting horribly confused and stuck, though the solution should be quite similar... Any help would be appreciated!! Thanks!!

Oracle Openscript 13.1 Image based recording does not work

I have downloaded the latest Oracle OpenScript 13.1, which has a new feature of image-based recording, but when I start the image recorder and click on capture, it does not allow me to capture the image on forms or the web.

Can anyone please help solve this issue?

how to change gmock verbosity level

I want to turn gmock warnings into errors for all uninteresting gmock function calls. I searched a bit and found that I should use the --gmock_verbose=LEVEL flag, but where should I add this flag? Unfortunately, I can't change all NiceMocks to StrictMocks.

how does a test client work

Maybe this question is not relevant here; let me know.

I'm trying to find out how exactly a test_client (in my case, a Flask test_client) works in general.

I traced it back to the werkzeug.test.Client description (for my case specifically):

"This class allows to send requests to a wrapped application."

But what happens under the hood? The application doesn't actually listen on a port?
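Under the hood, a test client typically skips the network entirely: it fabricates a request environment in memory and invokes the application callable directly, in-process, so no port is ever opened. A toy sketch of the idea (hypothetical names, not werkzeug's actual API), in Java for illustration:

```java
import java.util.Map;
import java.util.function.Function;

public class TestClientDemo {
    // The "application" is just a function from a request environment to a response body
    static Function<Map<String, String>, String> app =
            env -> "Hello " + env.getOrDefault("PATH_INFO", "/");

    // The "test client" fabricates the environment and calls the app directly:
    // no socket is opened and no port is bound
    static String get(String path) {
        return app.apply(Map.of("REQUEST_METHOD", "GET", "PATH_INFO", path));
    }

    public static void main(String[] args) {
        System.out.println(get("/users"));  // prints "Hello /users"
    }
}
```

This mirrors what werkzeug's Client does with a WSGI app: it builds a WSGI environ dict and calls the application in the same process, which is why no port is involved.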

What kind of test is this? Android

I have seen some videos explaining how to test the launch of one activity by another activity. I wrote the following test, and after some reflection I am not sure whether it is an integration, instrumentation, or functional test. Can someone help me?

public class OneActivityTest {

    private OneActivity mActivity = null;

    @Rule
    public ActivityTestRule<OneActivity> mActivityTestRule = new ActivityTestRule<>(OneActivity.class);

    Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(TwoActivity.class.getName(), null, false);

    @Before
    public void setUp() {
        mActivity = mActivityTestRule.getActivity();
    }

    @Test
    public void checkYes() {

        Assert.assertNotNull(mActivity.findViewById(R.id.checkbox_sim));

        onView(withId(R.id.checkbox_sim)).perform(click());

        Assert.assertNotNull(mActivity.findViewById(R.id.save));

        onView(withId(R.id.save)).perform(click());

        Activity secondActivity = getInstrumentation().waitForMonitorWithTimeout(monitor, 5000);

        Assert.assertNotNull(secondActivity);

        secondActivity.finish();
    }
}

Mockito failing basic example

I'm new to mockito so I'm trying to learn with some basic examples.

Here is my service.

public class MyCoolServiceImpl implements MyCoolService{

    public String getName() {
        return "String from service";
    }

}

MyCoolService is just an interface

public interface MyCoolService {

    public String getName();
}

And I have a simple use case:

public class SomeUseCase {
    private MyCoolService service = new MyCoolServiceImpl();

    public String getNameFromService(){
        return service.getName();
    }
}

Nothing complicated. So I write my test class as follows:

public class SomeUseCaseTest {
    @Mock
    MyCoolService service;

    SomeUseCase useCase = new SomeUseCase();

    @Before
    public void setUp(){
        initMocks(this);

        when(service.getName()).thenReturn("String from mockito");
    }


    @Test
    public void getNameTest(){

        String str = useCase.getNameFromService();

        assertEquals("String from mockito", str);
    }
}

So, as I understand it, str should contain "String from mockito", since I'm telling Mockito to return that string when service.getName() is called. However, my test fails because it returns "String from service".

What am I missing here? Did I misunderstand how Mockito works?
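For what it's worth, the usual culprit in examples like this is that SomeUseCase instantiates its own MyCoolServiceImpl with new, so the @Mock field never reaches it; the mock exists, but the use case still talks to the real service. One fix is to inject the dependency (Mockito's @InjectMocks automates this). A minimal sketch without Mockito, using a hand-rolled fake in place of the mock:

```java
interface MyCoolService {
    String getName();
}

// The use case now receives its dependency instead of constructing it itself
class SomeUseCase {
    private final MyCoolService service;

    SomeUseCase(MyCoolService service) {
        this.service = service;
    }

    String getNameFromService() {
        return service.getName();
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // A hand-rolled fake standing in for the @Mock
        MyCoolService fake = () -> "String from mockito";

        SomeUseCase useCase = new SomeUseCase(fake);
        System.out.println(useCase.getNameFromService());  // prints "String from mockito"
    }
}
```

With Mockito itself, annotating the SomeUseCase field with @InjectMocks (and keeping initMocks(this)) achieves the same wiring.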

WebDriver is an interface

Why is WebDriver an interface?

Why should it be implemented by all driver classes?

This question was asked in a Selenium interview at Xento Systems.
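The short answer is polymorphism: test code is written against the one interface, and each browser ships its own implementation (ChromeDriver, FirefoxDriver, ...), so the concrete browser can be swapped without touching the tests. A toy sketch of the idea (hypothetical names, not Selenium's actual classes):

```java
// A toy stand-in for the WebDriver interface
interface Driver {
    String get(String url);
}

// Each browser-specific class supplies its own mechanics behind the same contract
class ChromeLikeDriver implements Driver {
    public String get(String url) { return "chrome fetched " + url; }
}

class FirefoxLikeDriver implements Driver {
    public String get(String url) { return "firefox fetched " + url; }
}

public class DriverDemo {
    // Test code depends only on the interface...
    static String visit(Driver driver) {
        return driver.get("http://example.com");
    }

    public static void main(String[] args) {
        // ...so the concrete browser can be chosen at runtime
        System.out.println(visit(new ChromeLikeDriver()));   // prints "chrome fetched http://example.com"
        System.out.println(visit(new FirefoxLikeDriver()));  // prints "firefox fetched http://example.com"
    }
}
```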

Xamarin.forms application testing on iOS device

I am developing an application with Xamarin.Forms cross-platform. I can test my application on my Android device, but I cannot test it on an iOS device. Just as I can test on the Android device after connecting it with a cable, without putting the app in the App Store, how can I test on an iPhone? I tried building an IPA, but it did not work. If there is a way, can you explain it clearly? I am still very new to mobile application development. Thanks in advance for your interest.

Verify a download large data in automation testing

I have to download 10 GB of data to a device (Surface or iPad) before executing tests, and it takes a long time to complete.

Is there any way to check (for automation testing) whether a download has started, and whether it completed successfully?

Thanks in advance!!!

How to get the code coverage in Angular Dart?

I've tried using the coverage package to get code coverage from my Angular Dart 4.0 project.

The following steps are my working flow:

  1. run pub serve test --port=8081
  2. run pub run test --pub-serve=8081 --pause-after-load
  3. wait for the test runner to print the observatory URL (e.g. http://127.0.0.1:12345)
  4. run pub global run coverage:collect_coverage --port=12345 -o coverage.json
  5. press Enter to resume test runner
  6. run pub global run coverage:format_coverage --packages=.packages-c -i coverage.json -o coverage.report

After step 5, I get a coverage.json file which contains some data, but I don't know why the coverage report always shows every line as uncovered.

Am I using the wrong flow for getting code coverage? Or is there a better way to do it?

Thanks !


Here is my experiment code:

experiment_component.dart

import 'package:angular/angular.dart';

@Component(
  selector: 'experiment_component',
  template: '<h1></h1>'
)
class ExperimentComponent {

  String greeting = '';

  void greet() {
    greeting = 'Hello';
  }

}

experiment_component_test.dart

@TestOn('browser')
@Tags(const ['aot'])
import 'package:angular/angular.dart';
import 'package:angular4_test/src/experiment_component.dart';
import 'package:angular_test/angular_test.dart';
import 'package:test/test.dart';

NgTestBed<ExperimentComponent> testBed;
NgTestFixture<ExperimentComponent> testFixture;

@AngularEntrypoint()
void main() {

  tearDown(disposeAnyRunningTest);

  setUp(() async {
    testBed = new NgTestBed<ExperimentComponent>();
    testFixture = await testBed.create();
  });

  test('should render Hello', () async {
    await testFixture.update((comp)=> comp.greet());
    expect(testFixture.text, contains('Hello'));
  });

}

.packages-c

angular4_test:lib/

coverage.report

 |import 'package:angular/angular.dart';
 |
 |@Component(
 |  selector: 'experiment_component',
 |  template: '<h1></h1>'
 |)
 |class ExperimentComponent {
 |
 |  String greeting = '';
 |
 |  void greet() {
0|    greeting = 'Hello';
 |  }
 |
 |}

foreign key in testing django with factory boy doesn't work

I have a problem with foreign keys in the factory_boy library. My test doesn't execute; I think the problem is in the foreign keys.

I am trying to test the user model, which is in user.models. This is what my code looks like:

class Task(models.Model):
    title = models.CharField(max_length=255, verbose_name='Заголовок')
    description = models.CharField(max_length=255, verbose_name='Описание')
    cost = models.DecimalField(max_digits=7, decimal_places=2, default=0, verbose_name='Цена')
    assignee = models.ForeignKey('users.User', related_name='assignee', null=True, verbose_name='Исполнитель')
    created_by = models.ForeignKey('users.User', related_name='created_by', verbose_name='Кем был создан')

    def __str__(self):
        return self.title

I test it with factory_boy; this is what my factory classes look like:

class UserFactoryCustomer(factory.Factory):

    class Meta:
        model = User

    first_name = 'Ahmed'
    last_name = 'Asadov'
    username = factory.LazyAttribute(lambda o: slugify(o.first_name + '.' + o.last_name))
    email = factory.LazyAttribute(lambda a: '{0}.{1}@example.com'.format(a.first_name, a.last_name).lower())
    user_type = 1
    balance = 10000.00

class UserFactoryExecutor(factory.Factory):

    class Meta:
        model = User

    first_name = 'Uluk'
    last_name = 'Djunusov'
    username = factory.LazyAttribute(lambda o: slugify(o.first_name + '.' + o.last_name))
    email = factory.LazyAttribute(lambda a: '{0}.{1}@example.com'.format(a.first_name, a.last_name).lower())
    user_type = 2
    balance = 5000.00


class TaskFactory(factory.Factory):

    class Meta:
        model = Task

    title = factory.Sequence(lambda n: 'Title {}'.format(n))
    description = factory.Sequence(lambda d: 'Description {}'.format(d))
    cost = 5000.00
    assignee = factory.SubFactory(UserFactoryExecutor)
    created_by = factory.SubFactory(UserFactoryCustomer)

This is an example of my test:

class ApiModelTestCase(TestCase):

     def test_creating_models_instance(self):
         executor = factories.UserFactoryExecutor()
         customer = factories.UserFactoryCustomer()
         Task.objects.create(title="Simple Task", description="Simple Description", cost="5000.00",
                            assignee=executor, created_by=customer)

This is my error:

ERROR: test_creating_models_instance (tests.test_models.ApiModelTestCase)

Traceback (most recent call last):
  File "/Users/heartprogrammer/Desktop/freelance-with-api/freelance/tests/test_models.py", line 12, in test_creating_models_instance
    assignee=executor, created_by=customer)
  File "/Users/heartprogrammer/Documents/envs/freelance/lib/python3.6/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/Users/heartprogrammer/Documents/envs/freelance/lib/python3.6/site-packages/django/db/models/query.py", line 394, in create
    obj.save(force_insert=True, using=self.db)
  File "/Users/heartprogrammer/Documents/envs/freelance/lib/python3.6/site-packages/django/db/models/base.py", line 763, in save
    "unsaved related object '%s'." % field.name
ValueError: save() prohibited to prevent data loss due to unsaved related object 'assignee'.

mardi 26 septembre 2017

Unit testing a void method in an internal static class

How do we go about testing a void method without instantiating the object in the test class? I understand that for an internal static class we are unable to use "new" to create an object to test. I've unit tested non-static classes by creating objects, but I haven't dealt with an internal static class and am stuck. Any guidance would be great.

Here is the class/method in question:

internal static class Util
{
    public static void AssertBytesEqual(byte[] expected, byte[] actual)
    {
        if (expected.Length != actual.Length)
        {
            Debugger.Break();
            throw new CryptographicException("The bytes were not of the expected length.");
        }

        for (int i = 0; i < expected.Length; i++)
        {
            if (expected[i] != actual[i])
            {
                Debugger.Break();
                throw new CryptographicException("The bytes were not identical.");
            }
        }
    }
}
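For what it's worth, a static method needs no instance at all: the test simply calls it on the class. (In C#, the test project additionally needs an [InternalsVisibleTo] attribute on the tested assembly to see an internal class.) Here is the same idea sketched in Java, with IllegalStateException standing in for CryptographicException:

```java
// A Java analogue of the internal static utility above
final class BytesUtil {
    private BytesUtil() {}  // no instances, mirroring a static class

    static void assertBytesEqual(byte[] expected, byte[] actual) {
        if (expected.length != actual.length) {
            throw new IllegalStateException("The bytes were not of the expected length.");
        }
        for (int i = 0; i < expected.length; i++) {
            if (expected[i] != actual[i]) {
                throw new IllegalStateException("The bytes were not identical.");
            }
        }
    }
}

public class StaticUtilTest {
    public static void main(String[] args) {
        // Equal input: the method returns normally; no "new" anywhere
        BytesUtil.assertBytesEqual(new byte[]{1, 2}, new byte[]{1, 2});

        // Unequal input: the test asserts that the expected exception is thrown
        boolean threw = false;
        try {
            BytesUtil.assertBytesEqual(new byte[]{1, 2}, new byte[]{1, 3});
        } catch (IllegalStateException e) {
            threw = true;
        }
        System.out.println(threw);  // prints "true"
    }
}
```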

How to access request properties on nock

I am trying to test the request URL that gets called on nock, but I couldn't figure out how to get access to it. The code looks like this:

nock(host).get(`/user/${user_uuid}`).reply(200, res);

user.searchByUUID(user_uuid).then( response => {
  //todo: test request url, no request object?
  expect(response.data).toEqual(res.data)
  done();
});

I've created a ticket on GitHub, but unfortunately gr2m's reply doesn't solve the problem, since the path that gets returned is the interceptor's path rather than the request's path.

Example: requesting path http://localhost:8080/user/1023 should show all the request's properties (headers, url, etc.) and not nock's headers and url.

Could not find declaration file for enzyme-adapter-react-16?

I've been using Enzyme to test components in my React application for some time. After updating my packages for the first time in a few weeks I've started getting an error from my tests.

 FAIL  src/__tests__/title.test.ts
  ● Testing title component › renders
Enzyme Internal Error: Enzyme expects an adapter to be configured, but found none. [...] 
To find out more about this, see http://ift.tt/2y78RJE

I proceeded to install 'enzyme-adapter-react-16' as described in the link and added the following lines to my test file:

import * as enzyme from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
enzyme.configure({ adapter: new Adapter() });

However, as my application is written in TypeScript, I now run into two new problems.

[screenshot: error 1] [screenshot: error 2]

To clarify the images: the first error is that there aren't any types for enzyme-adapter-react-16, and the second says that enzyme does not have the property configure.

I'm relatively new to TypeScript so I need some help. I've tried to install types for enzyme-adapter-react-16 but those do not appear to exist.

Should I attempt to add them myself, or is there some way I can avoid this problem altogether?

I'm also curious what made this error appear. I didn't need an adapter before; why now?

When to retest my javascript

I've run into an interesting situation where I took the accepted solution to this question about how to center a popup window and it worked great. I heavily tested it at the time in all versions of IE (utilizing simulation mode) that my clients were currently using without issue. Satisfied with the result, we put the update to our software in a queue for deployment.

NOTE: This question is not about centering a popup window

A couple of months later, at deployment time, it all of a sudden was not working in the most current version of IE (I believe there was an update to IE on our network since the testing). The popup window was now being positioned off the visible screen. To quickly correct this for deployment, I had to hard-code the top and left values to buy me the time I need to troubleshoot the issue.

This peculiar behavior has me wondering a few things about this one topic.

Should I be retesting my javascript code every time a relevant browser is updated? Does anyone here already do this? And perhaps follow some kind of standard they can share?

It would be really cumbersome to go back and retest the already tried and true javascript in our software, but it could be more trouble for me if something just stops working.

Class cannot resolve module as content unless @Stepwise used

I have a Spock class that, when run as a test suite, throws "Unable to resolve iconRow as content for geb.Page, or as a property on its Navigator context. Is iconRow a class you forgot to import?" unless I annotate my class with @Stepwise. However, I really don't want the test execution to stop on the first failure, which @Stepwise does.

I've tried writing (copy and pasting) my own extension using this post, but I still get these errors. It is using my extension, as I added some logging statements that were printed out to the console.

Here is one of my modules:

class IconRow extends Module {
    static content = {
        iconRow (required: false) {$("div.report-toolbar")}
    }
}

And a page that uses it:

class Report extends SomeOtherPage {
    static at = {$("div.grid-container").displayed}

    static content = {
        iconRow { module IconRow }
    }
}

And a snippet of the test that is failing:

class MyFailingTest extends GebReportingSpec {

    def setupSpec() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard

        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    @Unroll
    def "I work"() {
        given:
        at Report

        expect:
        this == that

        where:
        this << ["some list", "of values"]
        that << anotherModule.someContent*.@id
    }

    @Unroll
    def "I don't work"() {
        given:
        at Report

        expect:
        this == that

        where:
        this << ["some other", "list", "of values"]
        that << iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}

When executed as a suite, "I work" passes and "I don't work" fails because it cannot identify "iconRow" as content for the page. If I switch the order of the test cases, "I don't work" will pass and "I work" will fail. Alternatively, if I execute each test separately, they both pass.

What I have tried:

  • Adding/removing the required: true property from content in the modules
  • Prefixing the module name with the class, such as IconRow.iconRow
  • Defining my modules as static @Shared properties
  • Initialize the modules both in and outside of my setupSpec()
  • Making simple getter methods in each module's class that return the module, and referencing content such as IconRow.getIconRow().columnHeaders*.attr("innerText")*.toUpperCase()
  • Moving the contents of my setupSpec() into setup()
  • Adding autoClearCookies = false into my GebConfig.groovy
  • Making a @Shared Report report variable and prefix all modules with that such as report.iconRow

Very peculiar note about that last bullet point -- it magically resolves the modules that don't have the prefix -- so it won't resolve report.IconRow but will resolve just iconRow -- absolutely bizarre, because if I remove that variable the module that was just previously working suddenly can't be resolved again. I even tried declaring this variable and then not prefixing anything, and that did not work either.

Another problem that I keep banging my head against the wall with is that I'm also not sure of where the problem is. The error it throws leads me to believe that it's a project setup issue, but running each feature individually works fine, so it appears to be resolving the classes just fine.

On the other hand, perhaps it's an issue with the session and/or cookies? Although I have yet to see any official documentation on this, it seems to be the general consensus (from other posts and articles I've read) that only using @Stepwise will maintain your session between feature methods. If this is the case, why is my extension not working? It's pretty much a copy and paste of @Stepwise without the skipFeaturesAfterFirstFailingFeature method (I can post if needed), unless there is some other stuff going on behind the scenes with @Stepwise.

Apologies for the wall of text, but I've been trying to figure this out for about 6 hours now, so my brain is pretty fried.

Tools for generating Armstrong relations/tables?

I am looking for any tools that can generate an Armstrong relation (table) given a database table definition.

An informal definition of an Armstrong relation is a table with example data that satisfy all constraints implied by the constraints on the table and also violate all constraints not implied by the constraints on the table.

Note that this is different from arbitrarily generating random data for a database table in the following ways:

  1. There is no guarantee that arbitrarily generated random data will violate constraints not implied by the table.
  2. The data generated is likely to not be minimal whereas with Armstrong relations minimality is the aim.

Up to now, the only tools I could find that can generate an Armstrong relation come from academia and are no longer maintained or enhanced.

Apply statistical tests on multiple datasets for two methods comparison

I want to apply a paired t-test to the accuracy results of two classifiers on multiple datasets, to determine whether the differences are significant. The accuracy results for the two classifiers are:

x = [97.09 77.72 80.32 77.33 81.77 95.58 92.13]
y = [96.2 72.4 77.5 74.6 88.98 92.7 72.4]

I did the test in Matlab using the command [h,p] = ttest(x,y,'Alpha',0.05), but it failed to reject the null hypothesis, giving h = 0 and p = 0.2493. There is a clear difference between the two classifiers, so why does the test fail to reject?

Moving tester from alpha to beta testing on Google Play

We are using the Google Play console for testing our Android application. First, we created an alpha version with some testers added by email address. Now, we are releasing a beta version with a different group of testers. We selected the alpha-tester group for the alpha version and a named beta-tester group for the beta version.

Just in case, note that the alpha version has a higher version number than the beta version.

The issue we have: some testers of the alpha group have been moved to the beta-testers group (and removed from the alpha-testers group), but to our surprise, those users still see our alpha version (to which they shouldn't have access) on Google Play (or when redirected from the URL given by the Google Play console). Any help about what's going on here would be appreciated! :)

High level automated function verification test on whole application, before automated testing?

As part of the continuous deployment process, I'm beginning to wonder if it would be better to run a high-level FVT across the whole application after a new build (testing that core features work) before going deeper. If core features are broken, missing, don't open, etc., abort and rebuild until they pass.

Is this efficient?

JUnit test difference between assertEquals() and Assert.assertEquals()

I made a method to count occurrences of a given character in a String.

public Integer numberOf(String str, Character a){}

I tried to test as normal using:

@Test
public void test1(){
    Integer result = oc.numberOf("Lungimirante", 'u');
    Assert.assertEquals(1, result);
}

but Eclipse complains about it.

I googled and found that to test it I needed to use:

assertEquals(1, result); //it works correctly

instead of: Assert.assertEquals(1, result);

Could you explain why? What is the difference?
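One likely explanation (an assumption, since the imports aren't shown): with an Integer argument, the qualified call Assert.assertEquals(1, result) is ambiguous between the assertEquals(long, long) and assertEquals(Object, Object) overloads, while the static import may be resolving to a different Assert class entirely. The ambiguity can be reproduced with two toy overloads:

```java
public class OverloadDemo {
    // Toy stand-ins for the two relevant assertEquals overloads
    static String pick(long expected, long actual)     { return "(long, long)"; }
    static String pick(Object expected, Object actual) { return "(Object, Object)"; }

    public static void main(String[] args) {
        Integer result = 1;

        // pick(1, result);  // does not compile: ambiguous between the two overloads

        // Unboxing explicitly selects the primitive overload...
        System.out.println(pick(1, result.intValue()));       // prints "(long, long)"
        // ...while boxing both arguments selects the Object overload
        System.out.println(pick(Integer.valueOf(1), result)); // prints "(Object, Object)"
    }
}
```

In the test itself, the unambiguous forms are Assert.assertEquals(1, result.intValue()) or Assert.assertEquals(Integer.valueOf(1), result).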

How to test methods with attributes with asp.net core 2?

My project is an ASP.NET Core 2.0 web application.

How can I test whether my custom attribute works correctly by calling a method decorated with it?

For example:

Attribute

[AttributeUsage(AttributeTargets.Method)]
public class ValidateUserLoggedInAttribute : ActionFilterAttribute
{
  public override void OnActionExecuting(ActionExecutingContext context)
  {
     var controller = (BaseController)context.Controller;
     if (!controller.UserRepository.IsUserLoggedIn)
     {
        var routeValueForLogin = new RouteValueDictionary(new { action = "Login", controller = "Home", area = "" });
        context.Result = new RedirectToRouteResult(routeValueForLogin);
     }
  }
}

Controller:

[ValidateUserLoggedIn]
public IActionResult Start()
{
    ...
}

Test:

[TestMethod]
public void Test()
{
    // Act
    var result = this.controllerUnderTest.Start() as RedirectToRouteResult;

    // Assert
     Assert.IsNotNull(result);
     Assert.IsFalse(result.Permanent);

     var routeValues = result.RouteValues;

     const string ControllerKey = "controller";
     Assert.IsTrue(routeValues.ContainsKey(ControllerKey));
     Assert.AreEqual("Home", routeValues[ControllerKey]);

     const string ActionKey = "action";
     Assert.IsTrue(routeValues.ContainsKey(ActionKey));
     Assert.AreEqual("Login", routeValues[ActionKey]);
}

I have already written a test only for the attribute by creating an ActionExecutingContext, but I also want to test it for the controller method.

No qualifying bean of type found for dependency in test class

I have a spring boot application that has a test class as following:

@ContextConfiguration(value = {"classpath:integration/flex-allowcation-assignment-test.xml"})
public class FlexAllowanceAssignmentServiceIntegrationTest extends AbstractSecurityAwareIntegrationTest {

    @Autowired
    private FlexAllowanceAssignmentTestUtils assignmentEndpoint;

    @Test
    public void shouldReturnFailureResponse() {
        // Given
        String message = randomString();
        UserStub user = userBuilder().build();

        // When
        FlexAllowanceServiceResponse response = assignmentEndpoint.assign(message, user);

        // Then
        assertThat(response).isEqualTo(FAILURE);
    }
}

The FlexAllowanceAssignmentTestUtils class is:

public class FlexAllowanceAssignmentTestUtils {

    private final FlexAllowanceAssignmentGateway flexAllowanceAssignmentGateway;
    private final JacksonWrapper jacksonWrapper;

    @Autowired
    public FlexAllowanceAssignmentTestUtils(FlexAllowanceAssignmentGateway flexAllowanceAssignmentGateway, JacksonWrapper jacksonWrapper) {
        this.flexAllowanceAssignmentGateway = flexAllowanceAssignmentGateway;
        this.jacksonWrapper = jacksonWrapper;
    }

    public FlexAllowanceServiceResponse assign(String message, UserStub user) {
        Message<String> requestMessage = MessageBuilder
                .withPayload(message)
                .build();
        Message<?> response = flexAllowanceAssignmentGateway.assign(enhanceWithSecurityContext(requestMessage, user));
        String payload = (String) response.getPayload();
        return jacksonWrapper.fromJson(payload, FlexAllowanceServiceResponse.class);
    }
}

And the flex-allowcation-assignment-test.xml file is as below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://ift.tt/GArMu6"
       xmlns:xsi="http://ift.tt/ra1lAU"
       xmlns:int="http://ift.tt/1eF5VEE"
       xmlns:int-jms="http://ift.tt/TAOyvj"
       xsi:schemaLocation="http://ift.tt/GArMu6 http://ift.tt/1jdM0fG
       http://ift.tt/1eF5VEE http://ift.tt/1eF5VEQ
       http://ift.tt/TAOyvj http://ift.tt/1qGwG2k">

    <int-jms:outbound-gateway
            request-destination-name="UPL.EMPLOYEE.FLEX-ALLOWANCE.ASSIGNMENT.1.REQ"
            request-channel="flexAllowanceAssignmentRequestChannel"
            receive-timeout="60000"/>

    <int:channel id="flexAllowanceAssignmentRequestChannel"/>

    <int:gateway id="flexAllowanceAssignmentGateway"
                 service-interface="com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentGateway"
                 default-request-channel="flexAllowanceAssignmentRequestChannel"/>

    <bean class="com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils"/>

</beans>

When trying to run the FlexAllowanceAssignmentServiceIntegrationTest, I get the following exception:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentServiceIntegrationTest': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentServiceIntegrationTest.assignmentEndpoint; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true), @org.springframework.beans.factory.annotation.Qualifier(value=assignmentEndpoint)}
...
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentServiceIntegrationTest.assignmentEndpoint; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true), @org.springframework.beans.factory.annotation.Qualifier(value=assignmentEndpoint)}
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:561)
    at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88)
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:331)
    ... 25 more
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [com.orbitbenefits.benefitsselection.server.employee.flexallowance.FlexAllowanceAssignmentTestUtils] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true), @org.springframework.beans.factory.annotation.Qualifier(value=assignmentEndpoint)}
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoSuchBeanDefinitionException(DefaultListableBeanFactory.java:1301)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1047)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:942)
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
    ... 27 more

How come it cannot see the bean declared in the config file, or should I declare it elsewhere? Or could it be failing because its dependencies (FlexAllowanceAssignmentGateway and JacksonWrapper) aren't provided?