mardi 31 octobre 2017

(Python) How to test if a string is contained in a file

I'm trying to create a username database, and the code all runs how I want except when I test whether the file contains the entered text. For some reason it just runs past the elif statement and hits the else statement even if the name already exists in the file.

Here is the full code:

class username:
    def add():
        while(True):
            file = open("usernames.txt", 'r+')
            names = list(file.read())
            entered = str(input("Username: "))
            if len(entered) == 0:
                print("Please enter a valid username\n")
                continue
            elif not(entered.isalpha):
                print("Please enter a valid username\n")
                continue
            elif entered in file.read():
                print("That name is already in use.",
                      "\nPlease enter a different username. \n")
                continue
            elif len(entered) < 4 or len(entered) > 12:
                print("Please enter a username with 4-12 characters.\n")
                continue
            else:
                print(entered)
                file.close()
                add = open("usernames.txt", 'a+')
                plusnewline = entered + "\n"
                add.write(plusnewline)
                add.close()
                break

    def list():
        file = open("usernames.txt","r+")
        names = file.read()
        print(names)
        file.close()

username.add()
username.list()

edit: Answered by Shadow:

Changed:

names = list(file.read())

to:

names = file.read()

and changed:

elif entered in file.read():

to:

elif entered in names:
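Shadow's two changes combined give a working membership check. A compact sketch of the corrected logic (the input() call replaced by a fixed stand-in value so it runs non-interactively):

```python
# Read the file once into a plain string and test membership against
# that string. list(file.read()) produced a list of single characters
# (so only 1-character names could match), and the second file.read()
# returned "" because the file had already been consumed.
entered = "alice"  # stand-in for input("Username: ")

with open("usernames.txt", "a+") as f:
    f.seek(0)          # 'a+' positions at the end; rewind before reading
    names = f.read()

if entered in names:
    print("That name is already in use.")
else:
    with open("usernames.txt", "a") as f:
        f.write(entered + "\n")
```

Note that a plain substring test also matches partial names ("al" would match "alice"); checking `entered in names.splitlines()` is stricter.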

How to keep only Alpha version on Play store

My Android app is live on Google Play; the latest version is 22. I have not used alpha or beta testing so far. Now the app owner wants the app unpublished from the store so that nobody can view it on any platform (PC, mobile, etc.).

He said only selected people should be able to view the app, so I decided on alpha testing using a Google+ community. The problem is that if I unpublish the app, the alpha testing link does not show the app in the Play Store for newly added members of the community, even after they become testers. It only works for users who have already downloaded the app. If I republish the app, it works for everyone.

So my main question is: how is alpha testing possible for new users of the Google+ community without publishing the app?

If I want to change the route in a test controllers file, do I have to edit any other files?

I want to change the route in this file from users_new_url to signup_path:

require 'test_helper'

class UsersControllerTest < ActionDispatch::IntegrationTest
  test "should get new" do
    get users_new_url
    assert_response :success
  end

end

I have tried simply replacing it,

get signup_path

but when I run rails test, it always says that signup_path is an "undefined local variable or method". Do I need to edit other files?

If it helps, the following code is from test/controllers/users_controller_test.rb.
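For signup_path to exist, the route has to be defined with that name. In a typical Rails app this is done in config/routes.rb; a sketch, assuming a UsersController with a new action (Rails derives the signup_path/signup_url helpers from the path):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Defines signup_path and signup_url, mapping GET /signup
  # to UsersController#new.
  get '/signup', to: 'users#new'
end
```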

R - How to find the sensitivity of a Monte Carlo test?

I have written code which outputs the p-value of a given test from n samples and m replicates. How would I find out how the sensitivity of the test changes as a function of m?

sensitivity <- function(m) 

I am unsure how sensitivity is derived from a sample and related to the p-value. I have only ever looked at this in table form (or matrices) - is there a method to calculate it (without built-in R functions)?
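One common reading: the sensitivity (statistical power) of a test is the proportion of replicates in which the test rejects the null at a chosen level when the alternative is true, so it can be estimated directly from the p-values without any built-in functions. A sketch of that idea in Python (the question's function is R; all names here are illustrative, and the replicate function is a toy stand-in for the real test):

```python
import random

def sensitivity(m, run_test, alpha=0.05):
    """Estimate power: the fraction of m replicates whose p-value
    falls below alpha when data are drawn under the alternative."""
    rejections = 0
    for _ in range(m):
        p = run_test()          # one replicate -> one p-value
        if p < alpha:
            rejections += 1
    return rejections / m

# Toy replicate: under the alternative the p-value tends to be small.
random.seed(1)
fake_test = lambda: random.random() ** 4   # skewed toward 0
print(sensitivity(1000, fake_test))
```

Running this for several values of m shows how the estimate stabilizes as m grows, which is exactly "sensitivity as a function of m".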

Should I Use It.Is

So I'm a bit new to this testing business, and using the Moq library. I'm wondering about the use of the It.Is() method.

Say I have a class I'd like to put under test:

public class ZeroChecker
{
    public bool IsNotZero(int myInt)
    {
        return myInt != 0;
    }
}

So I make a corresponding class to test:

public class ZeroCheckerTest
{
    [Fact]
    public void IsNotZero_ReturnsTrue_WhenInputIsNotZero()
    {
        //Arrange
        var myInt = It.Is<int>(i => i != 0);
        ZeroChecker target = new ZeroChecker();

        //Act
        bool actual = target.IsNotZero(myInt);

        //Assert
        Assert.True(actual);
    }
}

However, my test fails! When I look in the debugger, I notice that myInt is set to zero!

So I'm considering that either:

1) I'm dumb, and this is not how one should use It.Is()

2) There's a bug in Moq

And in either case, how would I go about testing the above scenario? Switch to [InlineData()] and throw in a handful of non-zero ints, I suppose?

How to test an action creator with a fetch API request

I have a create-react-app with Redux and Jest for testing. I am using fetch to make the API request from an action creator. While writing tests I found that, even though I am trying to mock this request, it still tries to make the actual request to the API and gets a Network Connection Error (so the catch branch from the request runs every time I test). It might be worth mentioning that this works fine in the browser.

import fetch from 'isomorphic-fetch';
import { SUBMIT_SUCCESS, EMPLOYEE_DETAILS, ERROR } from './actionTypes';
import { API_URL } from '../../Constants';

export function getEmployeeDetails(id) {
  return async dispatch => {
    try {
      let payload = await (await fetch(`${API_URL}/users/${id}`, {
        method: 'GET',
        headers: { jwt: localStorage.getItem('jwt') },
      })).json();

      dispatch(getEmployeeDetailsSuccess(payload.user));
    } catch (err) {
      dispatch(errorCatch(err));
    }
  };
}

I have tried this - http://ift.tt/2l2sRpt

I have also tried this - http://ift.tt/2zUVedH

And basically spent a couple of days trying to work on this.

What am I doing wrong? Any chance to get a working example of how to test this action?

Thank you!

setupTestFrameworkScriptFile is not supported error

Hello, I am trying to use Jest + Enzyme to test my React components. I am not using Create React App, so I find the error I am getting very strange. Does anybody know what I am doing wrong here?

Here is the error:

Out of the box, Create React App only supports overriding these Jest options:

• collectCoverageFrom
• coverageReporters
• coverageThreshold
• snapshotSerializers

These options in your package.json Jest configuration are not currently supported by Create React App:

• setupTestFrameworkScriptFile

If you wish to override other Jest options, you need to eject from the default setup. You can do so by running npm run eject but remember that this is a one-way operation. You may also file an issue with Create React App to discuss supporting more options out of the box.

This is my test-setup.js

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-15';

configure({ adapter: new Adapter() });

This is my package.json

 {
      "name": "duplo-plugin-starter-react",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "directories": {
        "doc": "docs"
      },
      "jest": {
        "setupTestFrameworkScriptFile": "<rootDir>/test-setup.js"
      },
      "scripts": {
        "test": "react-scripts-ts test --env=jsdom",
        "build": "webpack",
        "start": "webpack-dev-server --progress --inline"
      },
      "repository": {
        "type": "git",
        "url": "git@github.corp.dyndns.com:vcharlesthompson/duplo-plugin-starter-react.git"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "@types/react": "^16.0.18",
        "@types/react-dom": "^16.0.2",
        "react": "^16.0.0",
        "react-dom": "^16.0.0",
        "source-map-loader": "^0.2.3"
      },
      "devDependencies": {
        "@types/enzyme": "^3.1.1",
        "@types/jest": "^21.1.5",
        "awesome-typescript-loader": "^3.2.3",
        "css-loader": "^0.28.7",
        "enzyme": "^3.1.0",
        "enzyme-adapter-react-16": "^1.0.2",
        "less": "^2.7.3",
        "less-loader": "^4.0.5",
        "react-addons-test-utils": "^15.6.2",
        "react-scripts-ts": "^2.8.0",
        "react-test-renderer": "^16.0.0",
        "style-loader": "^0.19.0",
        "typescript": "^2.5.3",
        "webpack": "^3.8.1",
        "webpack-dev-server": "^2.9.3"
      }
 }

Multiple categories for nunit through testfixture and test attributes

[TestFixture(Category = "P1")]
public class TestClass 
{
    [Test(Category = "P2")]
    public void Method()
    {
        //test
    }
}

In the above code snippet, what will be considered the TestCategory of Method: "P1", "P2", or both?

I want to use the category to filter out the tests.

Bash testing -n

I thought the -z option in a bash test would be true if the operand is null (empty), and -n would be true if the operand is not null. So then why is this happening?

var1=
echo ${#var1} # 0

[ -n $var1 ] && echo 'hi' # hi
unset var1
[ -n $var1 ] && echo 'hi' # still prints hi!
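The surprise comes from quoting, not from -n itself: unquoted, $var1 expands to nothing at all, so the test collapses to `[ -n ]`, the one-argument form of test, which is true because the literal string "-n" is itself non-empty. Quoting keeps the empty string in place as a real argument:

```shell
var1=

# Unquoted, $var1 disappears, so this is `[ -n ]`: true, because
# the single remaining argument ("-n") is a non-empty string.
if [ -n $var1 ]; then echo 'unquoted: hi'; fi

# Quoted, the empty string stays in place as an argument,
# and -n correctly reports it as empty (nothing is printed).
if [ -n "$var1" ]; then echo 'quoted: hi'; fi

# -z is the complement and agrees once the variable is quoted:
if [ -z "$var1" ]; then echo 'quoted: empty'; fi
```

The same applies after `unset var1`: with quotes, `[ -n "$var1" ]` is false as expected.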

Unsolved Problems with Data Analysis

What are some of the most annoying problems you find while analyzing data?

Is there anything you wish would have a simple fix?

What problems do people often fail to take into account in data analysis that they should?

Are there any other issues you can think of that people should be made aware of?

Laravel Dusk loginAs doesn't work

I have such a Dusk test:

namespace Tests\Browser\Features\ContentAdministratorSubsystem;

use App;
use App\Model\Data\Models\Account;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Support\Facades\Session;
use Laravel\Dusk\Browser;
use Tests\DuskTestCase;

class ApproveCommentTest extends DuskTestCase
{
    protected $user;

    public function setUp()
    {
        parent::setUp();
    }

    public function tearDown()
    {
        parent::tearDown();
    }

    /** @test */
    public function when_user_confirms_approve_comment_action_its_box_dissappears()
    {
        $this->user = factory(Account::class)->create(['status' => 'confirmed', 'type' => 'Content Administrator']);

        $this->browse(function ($first, $second) {
            $first->loginAs(Account::find($this->user->id))
                  ->visit(env('APP_URL').'/content-administrator')
                  ->assertSee('Sorry, the page you are looking for could not be found.');
        });
    }
}

It's the first time I'm testing with Dusk, so all I want to do is log in and visit the Content Administrator page.

So this test should fail, because that message can only be seen if something went wrong. I'm 100% sure that env('APP_URL').'/content-administrator' is a correct URL; I used dd() to see what it looks like and it is OK. I'm also sure that $this->user->id returns a real ID.

Route::prefix('content-administrator')->middleware('contentAdministrator')->namespace('ContentAdministratorSubsystem')->group(function () {
    Route::get('/', 'MainPageController@main');

    Route::get('/comments/delete/{id}', 'CommentController@delete');
    Route::get('/comments/approve/{id}', 'CommentController@approve');
});

This is my routes file. I have a middleware, but now it does nothing. Just passes request to $next.

When I tried to dd($first->loginAs($this->user->id)) I got this result:

Laravel\Dusk\Browser {#90
  +driver: Facebook\WebDriver\Remote\RemoteWebDriver {#89
    #executor: Facebook\WebDriver\Remote\HttpCommandExecutor {#259
      #url: "http://localhost:9515"
      #curl: curl resource {@397
        url: "http://localhost:9515/session/007a6ed618f11c93335f5710c834b8a1/url"
        content_type: "application/json; charset=utf-8"
        http_code: 200
        header_size: 102
        request_size: 230
        filetime: -1
        ssl_verify_result: 0
        redirect_count: 0
        total_time: 0.469234
        namelookup_time: 0.000219
        connect_time: 0.000403
        pretransfer_time: 0.000611
        size_upload: 53.0
        size_download: 72.0
        speed_download: 72.0
        speed_upload: 53.0
        download_content_length: 72.0
        upload_content_length: 53.0
        starttransfer_time: 0.469201
        redirect_time: 0.0
        redirect_url: ""
        primary_ip: "127.0.0.1"
        certinfo: []
        primary_port: 9515
        local_ip: "127.0.0.1"
        local_port: 50024
      }
    }
    #capabilities: Facebook\WebDriver\Remote\DesiredCapabilities {#91
      -capabilities: array:23 [
        "acceptSslCerts" => true
        "applicationCacheEnabled" => false
        "browserConnectionEnabled" => false
        "browserName" => "chrome"
        "chrome" => array:2 [
          "chromedriverVersion" => "2.32.498513 (2c63aa53b2c658de596ed550eb5267ec5967b351)"
          "userDataDir" => "/tmp/.org.chromium.Chromium.JFlr8n"
        ]
        "cssSelectorsEnabled" => true
        "databaseEnabled" => false
        "handlesAlerts" => true
        "hasTouchScreen" => false
        "javascriptEnabled" => true
        "locationContextEnabled" => true
        "mobileEmulationEnabled" => false
        "nativeEvents" => true
        "networkConnectionEnabled" => false
        "pageLoadStrategy" => "normal"
        "platform" => "Linux"
        "rotatable" => false
        "setWindowRect" => true
        "takesHeapSnapshot" => true
        "takesScreenshot" => true
        "unexpectedAlertBehaviour" => ""
        "version" => "62.0.3202.75"
        "webStorageEnabled" => true
      ]
    }
    #sessionID: "007a6ed618f11c93335f5710c834b8a1"
    #mouse: null
    #keyboard: null
    #touch: null
    #executeMethod: Facebook\WebDriver\Remote\RemoteExecuteMethod {#77
      -driver: Facebook\WebDriver\Remote\RemoteWebDriver {#89}
    }
  }
  +resolver: Laravel\Dusk\ElementResolver {#87
    +driver: Facebook\WebDriver\Remote\RemoteWebDriver {#89}
    +prefix: "body"
    +elements: []
    #buttonFinders: array:5 [
      0 => "findById"
      1 => "findButtonBySelector"
      2 => "findButtonByName"
      3 => "findButtonByValue"
      4 => "findButtonByText"
    ]
  }
  +page: null
  +component: null
}

I'm not sure whether it means that it has logged in successfully.

I also get this error.log file:

[
    {
        "level": "SEVERE",
        "message": "http:\/\/mydomain\/ - Failed to load resource: the server responded with a status of 404 (Not Found)",
        "source": "network",
        "timestamp": 1509463678547
    }
]

That "mydomain" is a correct value in real life; I just replaced the original one in this example so as not to show the domain.

So why does this test succeed when it shouldn't?

Surefire test configured for active profile not executed when running "Test Package" in NetBeans

My project contains unit and integration tests. By their nature, the integration tests take notably longer than the unit tests, which is why I do not want to execute them every time. Therefore I've created two Maven profiles:

  1. unit-tests: only executing unit test (.*Test.java)
  2. all-tests: executing all tests including integration tests (.*IT.java)

In my pom.xml, this configuration looks like this:

<profile>
    <id>unit-tests</id>
</profile>
<profile>
    <id>all-tests</id>
    <build>
    <plugins>
        <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
            <includes>
            <include>**/*IT.java</include>
            </includes>
        </configuration>
        </plugin>
    </plugins>
    </build>
</profile>

When testing (hitting Test in the project root context menu in NetBeans) or building the whole project, the test execution happens as I expect for each profile.

If I select a single package and run Test Package, only the unit tests are executed, even if the currently active profile is all-tests. Why is that?

TFS 2013 adding tester to a project

I would like to ask for a little help with Team Foundation Server 2013. I have created a test plan with test cases and I would like to add a user to this plan as a tester, but she does not appear in the Assign list of the Testing Center. Where can I add this user, and with what role in the TFS project?

I want this user to test the project in Visual Studio Team Foundation Server 2013 web app.

WebdriverIO - How to use mocha list reporter?

I am using WebdriverIO as an e2e testing framework, together with Mocha and Chai. However, I would like to have a reporter that gives more detail about the results instead of only pass/fail at the end. Right now, when I get a timeout error, I don't know which step in the test failed. I used Nightwatch before and I liked its reporter; it seems they used Mocha's list reporter.

However, there does not seem to be a plugin like wdio-list-reporter on npm, so is there another way to add it? Or is there a similar reporter that reports every step of the test and what the error was?

Timeout of 30000ms exceeded. Try to reduce the run time or increase your timeout for test specs

For example, I get the above in WebdriverIO now, but in Nightwatch I got information about where the test failed, for example if it timed out while waiting for the NAVBAR.

Automatic test on PRODUCTION environment

We have a web application that end-users use to sign documents. To send the documents using 2FA, we send a PIN code to the end-user's mobile phone. When we validate the PIN for the signature, we also send an e-mail with an attachment to the end-user who signed the document, with a copy to his manager or to the head office (central systems). This procedure is for PRODUCTION.

We have an automated test to periodically verify that everything is okay in the production environment, and my team has something like this:

CODE:

$cc = "central@headoffice.com";

$destination = "test@email.com"; // a test destination e-mail that we can check to see if we receive the e-mail of the transaction

if ($destination == "test@email.com") {
    $cc = ""; // if we are running a test over the PRD environment we don't want to send a copy of the transactional e-mail to the customer
}

Now, I don't like having a hard-coded "if"; I would prefer to run the test using a special flag / ENV variable (for instance ENV="TEST_PRD") that we can use to load a special configuration.

We use PHP5.
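The env-flag idea can replace the hard-coded "if" entirely; in PHP 5 the lookup is getenv(). Sketched here in Python for illustration, with a hypothetical APP_ENV variable and TEST_PRD value:

```python
import os

CENTRAL_CC = "central@headoffice.com"

def transaction_cc():
    # When the monitoring test runs against production, export
    # APP_ENV=TEST_PRD (variable and value are hypothetical names)
    # so the transactional copy is suppressed; production traffic,
    # with no flag set, keeps the real CC.
    if os.environ.get("APP_ENV") == "TEST_PRD":
        return ""
    return CENTRAL_CC
```

The PHP equivalent of the lookup would be `getenv('APP_ENV') === 'TEST_PRD'`, set in the test runner's environment rather than in the code.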

I think something like the documentation of "Rails testing" or "Angular testing" can help, but maybe you can help us with some insight.

Thanks

Unit test for bot created with botbuilder and nodejs

Can someone guide me on writing test scripts for a bot developed with Bot Builder using Node.js and API.AI as the recognizer?

The docs at http://ift.tt/2xD29am don't seem to be clear.

Can someone provide a step-by-step procedure for testing the bot's dialog flow, and advise how to automate testing for CI?

Siesta Test Class Gives 'Maximum call stack size exceeded' error

I've created a Test Class which includes some repeated actions, such as logging in and opening menu items through the menu. The error occurs while opening a menu item:

Test threw an exception
RangeError: Maximum call stack size exceeded
    at line 13760, character 45, of http://localhost:1841/app/test/Siesta/resources/js/siesta-all.js


Here is the Test Class; I guess the error occurs because of those multiple methods:

Class('Siesta.Test.OWeb', {

    isa     : Siesta.Test.ExtJS,

    methods: {
        login : function(callback){
            var t = this;

            this.chain(
                { waitForCQ : 'window[title=Login]' },

                function(next) {
                    t.cq1('>> textfield[name=username]').setValue('user@adress.com');
                    t.cq1('>> textfield[name=password]').setValue('password');
                    next();
                },

                {click: '>> button[text=Submit]'},
                {waitForCQNotVisible: 'window[title=Login]', desc: 'Submit process is succeed!'},

                callback
            )
        },

        firstMenuItem: function (callback) {
            var t = this;

            t.chain(
                {waitForCQ: 'treelist[itemId=navigationTreeList]'},
                {click: '>> treelistitem[_text=First]'},
                {click: '>> treelistitem[_text=MenuItem]', desc: 'First Menu Item is displaying now.'},

                callback
            )
        },

        secondMenuItem: function (callback) {
            var t = this;

            this.chain(
                {waitForCQ: 'treelist[itemId=navigationTreeList]'},
                {click: '>> treelistitem[_text=Second]'},
                {click: '>> treelistitem[_text=MenuItem]', desc: 'Second Menu Item is displaying now.'},

                callback
            )
        },
        // Keep going like this for 5-6 times...
    }
});

and I'm calling the Test Class in each menu item sanity file this way:

describe('CRUD Tests: Login', function (t) {
    t.it('Should extend the Test Class and Login to App', function (t) {
        t.chain(
            {
                login: t.next
            }
        )
    });

    t.it('Should open First Menu Item', function (t) {
        t.chain(
            {
                firstMenuItem: t
            }
        )
    });
});


The Test Class normally works with two methods, but as I said, after rearranging it for multiple menu items it threw this error. Is it possible to arrange the methods in an array and implement each menu item as an object,

methods: [
  {firstMenuItem: func () ... },
  {secondMenuItem: func () ... },
  //....
] 

and how would I call the array object in each menu item sanity file? I've tried, but it couldn't extend the Test Class... or is there any other idea?

Python assertJSONEqual sees output with 'info' and "info" as different

When I print expected_order_data it gives:

{'rating': 0.0, 'first_name': 'Debbie', 'last_name': 'Graves', 'user_id': 1, 'car': {'photo': [], 'make': 'Nissan', 'plates_number': '', 'model': 'March', 'year': ''}, 'avatar': 'path', 'cab_number': '1'}

response.content gives:

{"rating": 0.0, "first_name": "Debbie", "last_name": "Graves", "user_id": 1, "car": {"photo": [], "make": "Nissan", "plates_number": "", "model": "March", "year": ""}, "avatar": "path", "cab_number": "1"}

But when I do

self.assertJSONEqual(response.content, expected_order_data)

I get TypeError: expected string or buffer.
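One likely cause (hedged, as it depends on the Django and Python versions): older assertJSONEqual implementations call json.loads on both arguments, so passing the expected value as a dict raises "expected string or buffer". Normalizing both sides to JSON strings first avoids it; a Django-free sketch of the idea with stand-in data mirroring the question:

```python
import json

# Stand-ins (hypothetical, abbreviated from the question's values):
response_content = '{"rating": 0.0, "cab_number": "1"}'  # what response.content decodes to
expected_order_data = {"rating": 0.0, "cab_number": "1"}

# Serialize the expected dict so both arguments are strings; the
# single-vs-double-quote difference disappears once both are parsed.
expected_as_json = json.dumps(expected_order_data)
assert json.loads(response_content) == json.loads(expected_as_json)
```

In the test itself that corresponds to something like `self.assertJSONEqual(response.content.decode('utf-8'), json.dumps(expected_order_data))` (version-dependent, so treat as a sketch).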

How can I verify a response against a predefined JSON schema in Karate?

Currently, to check the response, I use the method below:

And match response ==
"""
  {
    "status":#number,
    "message":#string
  }
"""

Is there any way to do it like below?

And match response == someJsonSchemaDefinedInKarateConfigFile
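Karate can load files at runtime with read(), so the schema can live in its own JSON file containing the same fuzzy markers. A sketch (file name and location are hypothetical):

```cucumber
* def responseSchema = read('classpath:response-schema.json')
And match response == responseSchema
```

where response-schema.json would contain `{ "status": "#number", "message": "#string" }`.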

lundi 30 octobre 2017

Branching test of an ES6 generator function

Suppose we have the following generator function:

function *myGenerator(payload) {
  try {
    yield call(someFunction, payload)

    yield call(success)
  }
  catch (err) {
    yield call(error)
  }
}

I know I can test this function like so (using Mocha):

describe('myGenerator', () => {
  const iter = myGenerator({ prop1: 0 })

  let next = null
  it('should call someFunction', () => {
    next = iter.next()
    expect(next.value).to.deep.equal(call(someFunction, { prop1: 0 }))
  })
  it('should call success', () => {
    next = iter.next()
    expect(next.value).to.deep.equal(call(success))
  })
})

describe('myGenerator when an error occurs', () => {
   const iter = myGenerator({ prop1: 0 })

  let next = null
  it('should call someFunction', () => {
    next = iter.next()
    expect(next.value).to.deep.equal(call(someFunction, { prop1: 0 }))
  })
  it('should call error', () => {
    next = iter.throw('some error')
    expect(next.value).to.deep.equal(call(error))
  })
})

The question is: how can I combine the first part of these tests? Notice that the assertion on calling someFunction is duplicated.

How to read the first n characters of a string without reading the entire string in JavaScript?

I am testing and rewriting my input validation functions. I thought that maybe some users could somehow provide huge strings and cause problems (?), so I want my functions to read only the first n characters of the string. I do not want to read the entire string.

If I get the first n characters in this way:

var couldbehuge = $('#inputbox')
var onlyapart = couldbehuge.slice(0, 20)

It looks like I am temporarily storing the user input. Then, if I do this:

var onlyapart = $('#inputbox').slice(0, 20)

Is it safer because I am not storing the user input in a variable?

Also, I can read the string character by character:

var i, letter;
var onlyapart ="";
for(i=0; i<20; i++){
  letter = $('#inputbox')[i]
  onlyapart += letter
}

Any better idea?
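Two details worth noting: $('#inputbox') is a jQuery object, so the text is behind .val(), and slice takes a comma (slice(0, 20)), not a colon. Also, reading a prefix into a variable is no less safe than chaining the calls; either way the same string exists in memory, a variable only stores a reference, and slice copies just the requested range. A jQuery-free sketch:

```javascript
// slice(0, n) copies only the first n characters into a new string;
// the original input is not duplicated by the assignment.
const couldBeHuge = 'x'.repeat(1000000);   // stand-in for $('#inputbox').val()
const onlyAPart = couldBeHuge.slice(0, 20);

console.log(onlyAPart.length);   // 20
```

The character-by-character loop buys nothing over slice and, as written, would index DOM elements rather than characters.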

Testing OAuth in controllers

I'm using the Google Calendar API. The particular approach I'm using requires OAuth, and to be honest I'm not that familiar with it. But I am trying to work on my TDD. What are the best ways to test OAuth interactions?

If I simply make an http request: http://ift.tt/1995sls?

I get this JSON back from Google's API:

{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "required",
    "message": "Login Required",
    "locationType": "header",
    "location": "Authorization"
   }
  ],
  "code": 401,
  "message": "Login Required"
 }
}

This totally makes sense, because I haven't authorized whose calendars I'm getting back. I can handle this in code just fine. But I can't seem to figure out how to mock it out in a controller test.

All this being said, I'm not looking for an exact answer, because I understand that this question is a bit vague. I'm simply looking to be pointed towards some good blog posts or best practices when it comes to mocking OAuth. Thanks for your help!

Is there a free demo (test) open-source URL for Apache JMeter?

I need an open-source URL for demonstrating JMeter in a software testing lecture. I can't seem to find an easy-to-use demo URL on the web.

Java - How to test a method which has Future calls?

I have the following code and I am trying to write a test for it:

In my class (MyService) I have the following method, getItemBySiteUuidAndItemUuids, which invokes a service on two different threads and gets the results.

@Override
public void getItemBySiteUuidAndItemUuids(String param1, String param2) {
  ExecutorService executorService = Executors.newFixedThreadPool(2);

  Future<String> future1 = fetchData1(executorService, param1);
  Future<String> future2 = fetchData2(executorService, param1, param2);
  try {
    String data1 = future1.get();
    String data2 = future2.get();
  } catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
  }
}

private Future<List<ItemMaster>> fetchData1(
    ExecutorService executorService, String param1) {
  return executorService.submit(() -> {
    return externalService.getData(param1);
  });
}

private Future<List<ItemMaster>> fetchData2(
    ExecutorService executorService, String param1, String param2) {
  return executorService.submit(() -> {
    return externalService.getData(param1, param2);
  });
}

In my test class, I mocked externalService (which is injected into this class) and defined the mock behavior:

@Test
public void givenValidSiteUuidAndItemUuids_getItemBySiteUuidAndItemUuids_givesItemData() {
  Mockito.when(externalService.getData(Mockito.anyString())).thenReturn(getMockData());

  String data = myService.getData("mockParam1", "mockParam2");
  assertThat(data, is(expectedData));
}

Now when the test runs, it fails with a NullPointerException when it calls future1.get().

I am not able to understand how to get past this. Any pointers will help.

Thanks.
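A NullPointerException around future1.get() usually means something inside the submitted lambda blew up, for example the mock was never injected into the instance under test (so externalService is null on the worker thread), or an unstubbed overload returned null. Exceptions thrown inside a task are wrapped and only surface when get() is called; a self-contained sketch of that behavior (no Mockito, all names hypothetical):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDiagnostics {
    // Returns the class name of the wrapped failure, or "none".
    static String causeOf(Future<?> future) throws InterruptedException {
        try {
            future.get();
            return "none";
        } catch (ExecutionException e) {
            return e.getCause().getClass().getSimpleName();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Stand-in for a call on a never-injected (null) service field:
        Future<String> failing = pool.submit(() -> {
            String externalService = null;
            return externalService.toUpperCase();   // NPE on the worker thread...
        });

        // ...but it only surfaces here, wrapped in ExecutionException.
        System.out.println("cause: " + causeOf(failing));
        pool.shutdown();
    }
}
```

In the test, asserting on (or rethrowing) e.getCause() instead of calling e.printStackTrace() makes the real failure, such as a missing mock injection, visible.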

Is a path condition satisfiability check a verification approach?

If I ask the user whether these conditions are true or not, and then check (with a constraint solver) whether the conjunction of the path conditions is satisfiable, can this method be considered a verification approach? Or is an approach only considered verification when it detects whether errors exist?

image.picker : how to initialize edit form with corresponding image?

With the following code I get this wrong result: nose.proxy.AssertionError: 302 != 200 : Couldn't retrieve redirection page '/mes_dossiers/': response code was 302 (expected 200)

What is wrong with my code?

from django.test import TestCase, RequestFactory, Client
from ..models import *
from ..views import *
from django.core.management import call_command

class Cas(TestCase):

    def setUp(self):
        call_command('loaddata', 'fixture_users.json', verbosity=1)
        call_command('loaddata', 'dclicapp_tests_appareils_isoles.yaml', verbosity=1)

    def test_dossier_duplicate(self) :
        request = self.factory.get('/dossier/3/copier/', follow = True)
        request.user = User.objects.get(id=3)
        pk = 3
        response = dossier_duplicate(request, pk)
        response.client = Client()
        self.assertRedirects(response, '/mes_dossiers/', status_code=302, target_status_code=200)

How to run instrumentation in a specific process

How do I specify the process in which instrumentation should run?

My Activity runs in the <applicationId>:processName process, but when I try to start it from my simple Espresso test I get this error:

Caused by: java.lang.RuntimeException: Intent in process <applicationId> resolved to different process <applicationId>:processName: Intent { act=android.intent.action.MAIN flg=0x14000000 cmp=applicationId/.LoginActivity }

There is multiprocess testing in Espresso 3+, but it works only on Android 26. And I don't need to communicate between processes; I need to test only one activity, which runs in the specific process.

How to create a web service receive point to call an outbound service using the SoapUI 5.3.0 tool?

I started my testing job with inbound services. Everything was fine: I called the created WSDL, put parameters in the request, and received the SoapUI WS response.

Now I also need to test an outbound service, but for this job I need to create a system or endpoint which receives this message.

Is it possible with the SoapUI tool? How do I do that?

Can anyone help me? I'm new to testing outbound services.

Jasmine failed tests automation

Is there any automation I can use, when a test fails in Jasmine, to prepare the same environment for the specific failing test? I'm using Karma as the spec runner.

For example:

describe("this plugin", function(){
    beforeEach(function(){
        $("body").append("<input type='text' id='myplugintester'>");
        this.plugin = $("#myplugintester").plugin({
             cond1: true,
             cond2: new Date(),
             condetc: null
        });
    })
    afterEach(function(){
         $("#myplugintester").data("plugin").destroy();
         $("#myplugintester").remove();
    })
    it("should show the correct value", function(){
         expect(this.plugin.value).toEqual("somevalue");
    });
    it("should display 'disabled' when cond3 is not null", function(){
         this.plugin.cond3 = "blabla";
         expect(this.plugin.value).toEqual("somevalue");
    });
 });

When the second case fails, I have to write the following on a test page to debug what went wrong with the code.

$("body").append("<input type='text' id='myplugintester'>");
this.plugin = $("#myplugintester").plugin({
     cond1: true,
     cond2: new Date(),
     condetc: null
});
this.plugin.cond3 = "blabla";
console.log(this.plugin.value);
$("#myplugintester").data("plugin").destroy();
$("#myplugintester").remove();

Does any Node package automate this? Or how do other developers handle these cases?

Note: I switched from grunt-jasmine to grunt-karma because of the speed. grunt-jasmine allowed me to run single test cases in a browser, which I could debug with Chrome Developer Tools. I looked at several HTML reporters for Karma, but they only state the result in the output HTML. They do not run the specs in a way that I can interrupt and debug.

Jarque-Bera test for all columns in R

I'm trying to run the Jarque-Bera test for normality in R. I have a dataset with 30 time series and would like to run the test for each column, since the time series are independent.

What I have tried so far:

> head(PF_ret)
           ATCOA SS Equity ATCOB SS Equity ELUXB SS Equity HMB SS Equity INDUC SS Equity INVEB SS Equity
1990-03-30    -0.037661051     -0.07646475     -0.04595186   0.008474576      0.01250000     -0.04595805
1990-04-30     0.006179197      0.02365591      0.06001529   0.008403361      0.06172840      0.02420949
1990-05-31     0.114636643      0.08823529     -0.07573026   0.108333333      0.19209302      0.05885191
1990-06-29     0.044077135      0.04343629      0.06554819   0.090225564     -0.04877097      0.05558087
1990-07-31    -0.014072120     -0.02127660      0.00000000   0.137931034      0.02543068      0.00000000
1990-08-31    -0.285459411     -0.25425331     -0.28451117  -0.151515152     -0.08000000     -0.21061718

library(tseries)


> jarque.bera.test(PF_ret[,1])

    Jarque Bera Test

data:  PF_ret[, 1]
X-squared = 24.465, df = 2, p-value = 4.87e-06

I know that [,1] means we are looking at the first column of the dataset. But I have not managed to write the call so that the test runs separately for each column. I ran the test on the columns above, expecting R to print six JB-test results.

> jarque.bera.test(PF_ret[,1:6])
Error in jarque.bera.test(PF_ret[, 1:6]) : 
  x is not a vector or univariate time series

Can anyone help me with this issue?
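In R, the simplest fix is to apply the univariate test to each column, e.g. lapply(as.data.frame(PF_ret), jarque.bera.test). To make the column-wise idea concrete, here is a small pure-Python sketch of the JB statistic; the column names and numbers below are made up for illustration:

```python
# Jarque-Bera statistic computed column by column.
# JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is the sample skewness and
# K the sample kurtosis; under normality JB ~ chi-squared with df = 2.

def jarque_bera_stat(x):
    """Return the JB statistic for one series (a list of floats)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # central moments
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# columns: name -> series, standing in for the 30 time series
returns = {
    "ATCOA": [-0.0377, 0.0062, 0.1146, 0.0441, -0.0141, -0.2855],
    "HMB":   [0.0085, 0.0084, 0.1083, 0.0902, 0.1379, -0.1515],
}
jb_by_column = {name: jarque_bera_stat(col) for name, col in returns.items()}
```

The key point is the same in both languages: the test is univariate, so you loop (or lapply/sapply) over the columns rather than passing the whole matrix in one call.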

Post Deployment Integration Testing in Mule

I have an application with an integration test suite written using MUnit. I am deploying it to CloudHub using Jenkins. How can I execute the test cases post-deployment? Is there a command-line tool I can make use of, or can it be done using Maven or Jenkins? Kindly give me some input on this. Thanks in advance!

Does Firebase Support Appium tests

We are using Appium tests to test our Android apps. While researching cloud test labs, I didn't find any documentation about Firebase supporting Appium tests.

  1. So, does Firebase support Appium tests?

Unable to run Cucumber Tests using Tags through Maven

When I run Cucumber tests using "Run as JUnit Test", the tests run properly with the proper tags.

Also, when I run Cucumber tests using Maven, the tests run properly with the proper tags, provided I have mentioned the tags in the runner class:

@Cucumber.Options(format={"pretty", "html:target/cucumber", "json:target/cucumber.json"}, tags={"@smokeTest"})

But I want to be able to provide tags as an argument to the mvn test command, so I am using the following:

mvn test -Dcucumber.options="--tags @tagTest"

But it runs all the test cases irrespective of my tags.

Also, when I use the command mvn test -Dcucumber.options="--tags @tagTest", I mention no tags in my runner class:

@Cucumber.Options(format={"pretty", "html:target/cucumber", "json:target/cucumber.json"})

Please let me know where I am going wrong.

This is runnerTest code:

import org.junit.runner.RunWith;
import cucumber.junit.Cucumber; 

@RunWith(Cucumber.class)

@Cucumber.Options(format={"pretty", "html:target/cucumber","json:target/cucumber.json"})

public class runnerTest {

}

Attached

pom.xml

JaCoCo plugin for Jenkins not working as expected

I have a question/problem with the Build-over-build feature. I have this config in my pipeline:

jacoco buildOverBuild: true, deltaMethodCoverage: '1'

This code is generated by the pipeline syntax plugin when you mark the checkbox with the 'Fail the build if coverage degrades more than the delta thresholds' value.

And when I run a build log shows:

[JaCoCo plugin] Delta coverage: class: 0.0, method: -5.0, line: -1.989708, branch: 0.0, instruction: -1.413044, complexity: -4.545452
[JaCoCo plugin] Delta thresholds: JacocoHealthReportDeltaThresholds [deltaInstruction=0.0, deltaBranch=0.0, deltaComplexity=0.0, deltaLine=0.0, deltaMethod=1.0, deltaClass=0.0]
[JaCoCo plugin] Results of delta thresholds check: FAILURE

But the build does not fail and finishes successfully. Any idea how to break the build when the delta thresholds check FAILS, as is happening here?

Thanks!

AWS-Device Farm: remotely trigger AWS device farm script and send parameter to the same

AWS Device Farm: I need to set up a monitoring system using AWS where I can use an API to trigger a script. The script should also be able to receive input from the trigger for execution. Can anyone please assist me with this?

Will setting it up in Jenkins help?

dimanche 29 octobre 2017

How do you use fixtures in jasmine-jquery

Running into an issue with using jasmine-jquery to test my jquery.

I've verified I have jasmine, jquery and jasmine-jquery. Here are the scripts from my example.html file:

<script src="http://ift.tt/2uTh1V7"></script>
<script type="text/javascript" src="jasmine/lib/jasmine-2.8.0/jasmine.js"></script>
<script type="text/javascript" src="jasmine/lib/jasmine-2.8.0/jasmine-html.js"></script>
<script type="text/javascript" src="jasmine/lib/jasmine-2.8.0/boot.js"></script>
<script type='text/javascript' src='jasmine/lib/jasmine-2.8.0/jasmine-jquery.js'></script>

Here is my specfile:

describe("testing fixtures", function(){


it("tests three crucial functions", function(){
    expect(readFixtures).toBeDefined();
    expect(setFixtures).toBeDefined();
    expect(loadFixtures).toBeDefined();
  })
})

describe("testing example.html", function(){
  var fixture;
  beforeEach(function(){
    loadFixtures('example.html');
    fixture = $('#testExample');
    fixture.testExample();
  })

  it('punchcard should be defined', function(){
    expect(fixture).toBeDefined();
  })

});

I was trying to test the fixture helpers before using them in my spec file, but it seems they really are not defined. According to my script tags, jasmine-jquery is there, so those methods should be usable. Not sure what is going on, or if there is something simple I am missing. Any insight would be greatly appreciated.

Thanks for looking!

Component testing Azure Functions

I am using Azure Functions backed by a SQL database using the Entity Framework.

To ensure code correctness, I would like to perform full automated component tests on my local machine.

Injecting a connection string which points to a local database solves the SQL Server issue, but I am yet to find a way to spin up a local instance of Azure Functions programmatically. As VS exposes this behavior through the "Debug" functionality, it should theoretically be possible.

How do I programmatically spin up a local version of Azure Functions using a project which is located in the same solution as my testing project?

Jasmine test on javascript from file in another directory

I am trying to write a unit test suite for a javascript file that is located elsewhere in the project directory.

I cannot relocate that file or the spec file due to certain other dependencies. How can I import it?

here's how the project directory structure looks:

  • ./
  • ./js
  • ./js/module
  • ./js/module/MYFUNC.js
  • ...
  • ./spec/MYFUNC_tests.js

MYFUNC.js looks like this:

var constants = {a:100}
var myObj = function(){
    var local_var = true;

    myObj.doSomething = function(){
        return true;
    }
    myObj.answerToEverything = function(){
        return 42;
    }
}

This is my goal test case in MYFUNC_tests.js

describe("MYFUNC ", function(){
    var obj_a = myObj();
    it("Universe question ", function(){
        expect(obj_a.answerToEverything()).toBe(42);
    });
});

Any help or links to guides on correct importing would be greatly appreciated!

Could 'Spring RESTful' + 'Java Config' + 'Tests' be used without Spring MVC?

I can't make tests work with such a configuration.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
@WebAppConfiguration
@TestPropertySource(value="classpath:app-test.properties")
public class LoginTest {

    @Autowired
    private WebApplicationContext context;

    MockMvc mockMvc;

    @InjectMocks
    private AuthController authController;

    @Test
    public void testOk() throws Exception {
        MockitoAnnotations.initMocks(this);
        mockMvc = MockMvcBuilders
                    .webAppContextSetup(context)
                    .apply(springSecurity())
                    .build();
        mockMvc.perform(get("/api/auth")
            .with(httpBasic("u", "p")))
            .andExpect(status().isOk());
    }

}

Getting error:

java.lang.IllegalStateException: springSecurityFilterChain cannot be null. Ensure a Bean with the name springSecurityFilterChain implementing Filter is present or inject the Filter to be used.

Test with Mockito

I have two classes: Service and Main. I am trying to test them with Mockito, but I ran into problems; perhaps my program is constructed incorrectly (I have read that I may need interfaces), or it is something else. Can somebody help me get started?

public class UniqueCharacterCounterService {
private  Map<String, Map<Character, Integer>> cache = new HashMap<>();

public Map<Character,Integer> countCharacters(final String inputData) throws IncorrectInputDataException {
    if (inputData == null) {
        throw new IncorrectInputDataException("The method parameter can't be null");
    }
    Map<Character, Integer> value = cache.get(inputData);
    Map<Character, Integer> result = new LinkedHashMap<>();
    if (value != null) {
        return value;
    }
    char symbol;
    Integer counter;
    char inputAsCharArray[] = inputData.toCharArray();
    for (int j = 0; j < inputAsCharArray.length; j++) {
        symbol = inputAsCharArray[j];
        counter = result.get(symbol);
        if (counter != null) {
            result.put(symbol, counter + 1);
        } else {
            result.put(symbol, 1);
        }
    }
    cache.put(inputData, result);
    return result;
}

public void printResult(Map<Character, Integer> input) {
    input.forEach((key, value) -> System.out.println("\"" + key + "\" " + "- " + value));
}

}

And Main

public class Main {
final static Logger logger = Logger.getLogger(Main.class);

public static void main(String[] args) throws IncorrectInputDataException {
    try {
        UniqueCharacterCounterService characterCounter = new UniqueCharacterCounterService();
        Map<Character, Integer> result = characterCounter.countCharacters("Hello");
        characterCounter.printResult(result);
        result = characterCounter.countCharacters("world");
        characterCounter.printResult(result);
        result = characterCounter.countCharacters("Hello");
        characterCounter.printResult(result);
    } catch (IncorrectInputDataException e) {
        logger.error(e.getMessage());
    }
}

}

Angular 2+ testing: fakeAsync with Inject

I am trying to test service with private method which is called in constructor and contains observable:

import {Injectable} from '@angular/core';
import {Observable} from 'rxjs/Observable';
import 'rxjs/add/observable/interval';

@Injectable()
export class SomeService {

  private booleanValue: boolean = false;

  constructor() {
    this.obsMethod();
  }

  private obsMethod() {
    Observable.interval(5000).subscribe(() => {
      this.booleanValue = !this.booleanValue;
    });
  }

  public getBooleanValue() {
    return this.booleanValue;
  }
}

I prepared three specs. The first uses a simple instance of the service created with the new operator, and it works. The second uses TestBed.get() injection, and it works too.

When I use inject in beforeEach, the spec does not work. But why? Is it a problem with fakeAsync and inject used at the same time? How can I use them both together?

I created working demo on plunker with service and three specs.

http://ift.tt/2z0W0bL

How to configure flask setUp function in unittest with pymongo database

I'm new here. After developing a small CRUD application in Flask, I need to write some API tests, but I really don't know how to configure the setUp function with unittest. This is my __init__.py file in the app folder:

from flask import Flask
from flask_cors import CORS
from flask_pymongo import PyMongo

app = Flask(__name__)

app.config.from_object('config')
app.config.from_envvar('APP_CONFIG_FILE')

CORS(app)
mongo = PyMongo(app)



from app.views.index import index_mod # noqa: 402
from app.views.lists import lists_routes # noqa: 402
from app.views.cards import cards_routes # noqa: 402

app.register_blueprint(index_mod)
app.register_blueprint(lists_routes, url_prefix='/board')
app.register_blueprint(cards_routes, url_prefix='/board/lists')

I made the directory ./test/basic_test.py, but I also couldn't do from app import app. Below is an example of a route I want to test.

@lists_routes.route('/lists', methods=['GET', 'POST'])
def get_all_lists():
    lists = mongo.db.lists

    if request.method == 'GET':
        output = []

        for q in lists.find():
            output.append({'name': q['name'], 'list_id': str(q['_id'])})

    elif request.method == 'POST':
        name = request.json['name']

        list_post = lists.insert_one({'name': name}).inserted_id
        new_list = lists.find_one_or_404({'_id': list_post})

        output = {'name': new_list['name'], 'list_id': str(new_list['_id'])}

    return jsonify(output)

Thank you for helping me.

How configure Gitlab CI to parse test output?

I am working on GitLab and I would like to set up CI (it is the first time I am configuring something like this; please assume I am a beginner).

I wrote some code in C with a simple CUnit test, and configured CI with a "build" job and a "test" job. My test job succeeds even though I wrote a failing test: when I open the job on GitLab I see the failed output, but the job is marked "Passed".

How can I configure GitLab to understand that the test failed?

I think there is a parsing configuration somewhere. I tried "CI / CD Settings -> Test coverage parsing", but I think that is the wrong place, and it did not work.

Here is the output of my test:

     CUnit - A unit testing framework for C - Version 2.1-2
     http://ift.tt/1xBUNha


Suite: TEST SUITE FUNCTION
  Test: Test of function::triple ...FAILED
    1. main.c:61  - CU_ASSERT_EQUAL(triple(3),1)

Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests      1      1      0      1        0
             asserts      3      3      2      1      n/a

Elapsed time =    0.000 seconds

akka persistent actor testing events generated

By the definition of CQRS, a command can/should be validated and may even be declined (if validation does not pass). As part of my command validation I check whether a state transition is really needed. Take a simple, dummy example: the actor is in state A. A command is sent to the actor to transition to state B. The command gets validated and in the end an event StateBUpdated is generated. Then the exact same command is sent to transition to state B. Again the command gets validated, and during validation it is decided that no event will be generated (since we are already in state B); we just respond that the command was processed and everything is OK. It is an idempotency thing.

Nevertheless, I have a hard time (unit) testing this. The usual unit test for a persistent actor sends a command to the actor, restarts it, and checks that the state was persisted. I want to send a command to the actor and check how many events were generated. How can I do that?

Thanks

String equality fails while testing with regex escaped expression

I am testing a function that returns a regex-escaped version of a string, so in the test I check whether the result equals the escaped string, but it fails with:

Expected value to match:
  "4**2"
Received:
  "4\\*\\*2"

This is the test I have written:

describe("escapeRegExp", () => {
  // required for quotes test
  test("Escapes all RegExp characters", () => {
    expect(escapeRegExp("4**2")).toMatch("4\*\*2");
  });
});

where the escapeRegExp function returns "4\*\*2", but the matcher expects "4**2".

But when I use

expect(escapeRegExp("4**2")).toMatchSnapshot("4\*\*2"); it works fine.

Any idea why it fails when checking with .toBe() or .toEqual()?

Is there any other field/argument I have to add to make it work with toEqual, or is toMatchSnapshot the way to go for this?
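Two things are likely going on: Jest's toMatch with a string argument checks containment/pattern matching rather than strict equality, and in the test source the single backslashes in "4\*\*2" are dropped by the JavaScript string literal, leaving just "4**2". Writing expect(escapeRegExp("4**2")).toBe("4\\*\\*2") should make the equality check pass. The escaping pitfall itself, illustrated with Python's re.escape purely as a stand-in for escapeRegExp:

```python
import re

escaped = re.escape("4**2")          # the 7-character string 4\*\*2
assert escaped == r"4\*\*2"          # literal equality against the escaped form

# a containment check (what a string-matcher effectively does) fails,
# because the unescaped "4**2" is not a substring of the escaped result:
assert "4**2" not in escaped

# but using the escaped string as a regex against the original matches:
assert re.fullmatch(escaped, "4**2") is not None
```

So the assertion must compare escaped form to escaped form (with the backslashes themselves escaped, or a raw string), not escaped form to the raw input.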

products marketing help needed

I have a decent-sized list in the men's self-help/dating space and I'm working on building out my follow-up series. I realize my list quality and pre-sell have a major influence on my conversions, and that I'd have to do some serious testing/tweaking to optimize things. I went through many resources and also checked Small Business Video but did not get any help. Any suggestions?

Any help will be appreciated. Thank you.

How do I load nestest ROM?

I finished writing my 6502 emulator and I'm ready to start testing it. I found the nestest ROM with some documentation, but I'm not sure what the proper way of loading the ROM is. The author says that emulators should start at 0xC000, but that address contains 0 when I load the ROM, so I must be doing something wrong.

So right now my loading procedure looks something like this:

clear memory
set PC to 0x8000
open the file
skip first 16 bytes (iNES header)
load the rest of the file into RAM (starting at 0x8000)
set PC to 0xC000
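nestest is an NROM cartridge with a single 16 KB PRG bank, and with only one bank the CPU sees it at both 0x8000 and 0xC000 (the bank is mirrored). That is why starting at 0xC000 reads zeros if the bank is loaded only at 0x8000. A sketch of the loader, assuming a plain iNES file with no trainer:

```python
def load_ines(rom_bytes):
    """Load an iNES ROM into a 64 KB CPU address space (NROM, no trainer)."""
    memory = bytearray(0x10000)
    assert rom_bytes[:4] == b"NES\x1a", "not an iNES file"
    prg_banks = rom_bytes[4]                     # number of 16 KB PRG banks
    prg = rom_bytes[16:16 + prg_banks * 0x4000]  # PRG follows the 16-byte header
    memory[0x8000:0x8000 + len(prg)] = prg
    if prg_banks == 1:                           # single bank: mirror at 0xC000
        memory[0xC000:0x10000] = prg
    return memory

# synthetic one-bank ROM: 16-byte header + 16 KB of NOPs (0xEA)
rom = b"NES\x1a" + bytes([1, 1]) + bytes(10) + b"\xea" * 0x4000
mem = load_ines(rom)
assert mem[0x8000] == 0xEA and mem[0xC000] == 0xEA
```

After loading, set PC to 0xC000 as the documentation says; with the mirror in place it now points at real code instead of zeros.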

Test function calls inside a resolved promise

export class SomeComponent extends Component {

  submitFunction() {
    this.props.functionCall1()
      .then(
        () => {
          this.props.functionCall2();
        },
        () => this.props.functionCall3()
      )
  }
}

I am trying to test that functionCall2 gets called when the promise is resolved, and that functionCall3 is called when the promise is rejected. I am using mocha, chai, enzyme, and sinon for testing. I have tried a lot but could not get a working solution. This is what I have done so far:

it('calls functionCall1 when the form is submitted', () => {
  const stub = sinon.stub().returns(Promise.resolve());
  const props = {...someProps, functionCall1: stub};
  const container = new SomeComponent(props);
  container.submitFunction();

  expect(props.functionCall1).to.have.been.called();
  expect(props.functionCall2).to.have.been.called();
  expect(stub).to.have.been.called();
});

it('calls functionCall1 when the form is submitted', () => {
  const stub = sinon.stub().returns(Promise.reject());
  const props = {...someProps, functionCall1: stub};
  const container = new SomeComponent(props);
  container.submitFunction();

  expect(props.functionCall1).to.have.been.called();
  expect(props.functionCall3).to.have.been.called();
  expect(stub).to.have.been.called();
});

I appreciate any help.

samedi 28 octobre 2017

Please I was away from Testing & Software?

One term I kept hearing about software testing in the software engineering disciplines of the IT industry over the last 3 to 5 months was "keyword-driven testing", relating to @Katalon, @TestArchitect, etc. and the theory itself. Another was "AI in software testing", which still relates to automation, at least as I see it. Are there any other terms I should know about?

Use a different `google-services.json` when testing in Android

I am working on an Android application that uses Firebase for data storage. I'd like to run JVM unit and instrumentation tests that will be in the app/src/tests and app/src/androidTest directories, but I don't want to use the production database google-services.json that is currently in the root /app folder. I have created a testing database in Firebase console with its own google-services.json file. How do I tell Android to use that google-services.json when running unit or instrumentation tests?

Random Forest algorithm

I want to use the random forest algorithm, but I get the error "setting an array element with a sequence." Both type(s) and type(data) are DataFrame. The error occurs in rf.fit(inputs_train, expected_output_train).

data_inputs = s[["year_int","month_of_year_int","day_of_month_int","day_of_week_int","day_of_year_int"]]


expected_output = data[["SchDep"]]
inputs_train, inputs_test, expected_output_train, expected_output_test = train_test_split(data_inputs, expected_output,test_size = 0.33, random_state = 42)
rf = RandomForestClassifier(n_estimators = 100)
rf.fit(inputs_train, expected_output_train)
accuracy = rf.score(inputs_test, expected_output_test)
print("Accuracy = {}% ".format(accuracy*100))
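"setting an array element with a sequence" typically means NumPy could not build a clean numeric array: either some cell contains a sequence (a list or array), or y is 2-D where a 1-D array is expected. Note that data[["SchDep"]] with double brackets returns a DataFrame (2-D); data["SchDep"] or .values.ravel() gives the 1-D labels that fit() wants. A sketch of the usual fix, with made-up stand-in data:

```python
import numpy as np

# data[["SchDep"]] is a DataFrame, i.e. 2-D with shape (n, 1);
# classifiers want a 1-D label array, so flatten it first.
y_2d = np.array([[0], [1], [0], [1]])     # stand-in for data[["SchDep"]].values
y = y_2d.ravel()                          # shape (4,), what fit() expects
assert y.ndim == 1

# also make sure every feature cell is a plain number, not a list/array:
X = np.array([[2017, 10, 28], [2017, 10, 29],
              [2017, 10, 30], [2017, 10, 31]], dtype=float)
assert X.ndim == 2
```

If the error persists after flattening y, inspect the feature columns: any object-dtype column whose cells hold lists will trigger the same message.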

How to test dao layer without marking the classes transactional

I'm supposed to write unit tests for DAO layer. I have DAO classes like so:

@Repository
public class CustomerDaoImpl implements CustomerDao {

    @PersistenceContext
    private EntityManager em;

    // some methods ...
}

The application context in ApplicationContext.java is configured like so:

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories
@ComponentScan(basePackages = "/*censored*/")
public class ApplicationContext {

    @Bean
    public JpaTransactionManager transactionManager() {
        return new JpaTransactionManager(entityManagerFactory().getObject());
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean jpaFactoryBean = new LocalContainerEntityManagerFactoryBean();
        jpaFactoryBean.setDataSource(db());
        jpaFactoryBean.setLoadTimeWeaver(instrumentationLoadTimeWeaver());
        jpaFactoryBean.setPersistenceProviderClass(HibernatePersistenceProvider.class);
        return jpaFactoryBean;
    }

    @Bean
    public LocalValidatorFactoryBean localValidatorFactoryBean() {
        return new LocalValidatorFactoryBean();
    }

    @Bean
    public LoadTimeWeaver instrumentationLoadTimeWeaver() {
        return new InstrumentationLoadTimeWeaver();
    }

    @Bean
    public DataSource db() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        EmbeddedDatabase db = builder.setType(EmbeddedDatabaseType.DERBY).build();
        return db;
    }
}

And the test class like so:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationContext.class)
public class TestCustomerDao extends AbstractTestNGSpringContextTests 
{

    @Autowired
    private CustomerDao customerDao;

    @Test
    public void someTest() {
        // ... some test
    }
}

How do I run the unit test without having to mark the DAO class using the @Transactional annotation?

I read in other SO posts that @Transactional should be used at the service layer level. However, I'm supposed to write tests for DAO layer, without any service layer.

If I don't mark the DAO classes with the @Transactional annotation, I'm getting the following error: javax.persistence.TransactionRequiredException: No EntityManager with actual transaction available for current thread - cannot reliably process 'persist' call emerging from a method in the DAO class that is calling em.persist(entity).

Sorry if this is a stupid question, but I'm all new to this.

Typescript coverage seems incorrect with tsc

It seems that the coverage is not correct for the function I'd like to test:

public isSelfTweet = function(tweet: Tweet, twitterScreenName: string) {
    if(null == tweet || null == tweet.user || null == tweet.user.screen_name) {
        return false;
    }
    if(tweet.user.screen_name.toLowerCase() === twitterScreenName) {
        return true;
    }
    return false;
}

I write three tests and I have a 100% coverage :

  • tweet = null, twitterScreenName = "toto"
  • tweet with screen_name = "toto", twitterScreenName = "toto"
  • tweet with screen_name = "toto", twitterScreenName = "tutu"

Here is the coverage, which seems incorrect to me: for instance, I have never tested a non-null tweet with a null tweet.user.

In my package.json:

"nyc": "^11.2.1",

How to end-to-end test my React.js code?

I tried to use Selenium webdriver.io for E2E testing, but the implementation was relatively hard.

Is there any other way to do it?

Thanks

Intellij and sbt: How to make intellij to read custom configuration file for my play application tests?

For my tests (a Play Framework 2.6 app) I want to use a different configuration file.

To do this, I've added the following line to build.sbt:

javaOptions in Test += "-Dconfig.file=conf/application.test.conf"

When I run tests from sbt (sbt test), it works fine and reads the custom configuration file.

But when running tests from IntelliJ, it ignores this setting and uses the application.conf file.

How do I force IntelliJ to use this sbt setting?

How transform JSX code in browser

Which answer is correct?

  • With Babel stand alone build
  • Standard Babel and react preset
  • JSX transformer is recommended by Facebook starting from ReactJs v0.15
  • You cant use JSX in browser, must transpile it to ES5 before sending to browser

I chose option 4 ("You can't..."), but the right answer is 2 ("Standard Babel..."). Can you explain why?

How to test django logout using client.post()?

I want to test whether my user is logged out correctly. I can log in, and I tried to log out the same way, but the test fails and I can't figure out why.

Why this test fails?

def test_user_can_login_and_logout(self):
    response = self.client.post('/login/', {'username': 'login', 'password': 'pass'})
    user = auth.get_user(self.client)
    self.assertEqual(user.is_authenticated(), True)   # This works fine
    self.client.post('/logout/')
    self.assertEqual(user.is_authenticated(), False)  # This fails.

My urls.py

from django.conf.urls import url
from django.contrib.auth import views as auth_views

urlpatterns = [
        (...)
        url(r'^login/$', auth_views.login, {'template_name': 'home/login.html'}, name='login'),
        url(r'^logout/$', auth_views.logout, name='logout'),
        (...)
        ]

In the question I skipped the code responsible for creating the test user in the database.

How to test cache with Mockito

So I have this class, which counts unique symbols. The method contains a cache, and it is exactly the cache I need to test. I know I can use Mockito for this, but I am having some problems with it.

public class UniqueCharacterCounterService {
    private Map<String, Map<Character, Integer>> cache = new HashMap<>();

public Map<Character,Integer> countCharacters(final String inputData) throws IncorrectInputDataException {
    if (inputData == null) {
        throw new IncorrectInputDataException("The method parameter can't be null");
    }
    Map<Character, Integer> value = cache.get(inputData);
    Map<Character, Integer> result = new LinkedHashMap<>();
    if (value != null) {
        return value;
    }
    char symbol;
    Integer counter;
    char inputAsCharArray[] = inputData.toCharArray();
    for (int j = 0; j < inputAsCharArray.length; j++) {
        symbol = inputAsCharArray[j];
        counter = result.get(symbol);
        if (counter != null) {
            result.put(symbol, counter + 1);
        } else {
            result.put(symbol, 1);
        }
    }
    cache.put(inputData, result);
    return result;
}

public void printResult(Map<Character, Integer> input) {
    input.forEach((key, value) -> System.out.println("\"" + key + "\" " + "- " + value));
}

}

Main

public class Main {
    final static Logger logger = Logger.getLogger(Main.class);

    public static void main(String[] args) throws IncorrectInputDataException {
        try {
            UniqueCharacterCounterService characterCounter = new UniqueCharacterCounterService();
            Map<Character, Integer> result = characterCounter.countCharacters("Hello");
            characterCounter.printResult(result);
            result = characterCounter.countCharacters("world");
            characterCounter.printResult(result);
            result = characterCounter.countCharacters("Hello");
            characterCounter.printResult(result);
        } catch (IncorrectInputDataException e) {
            logger.error(e.getMessage());
        }
    }
}
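A cache is usually tested by counting how often the expensive computation actually runs: call countCharacters twice with the same input and assert that the work happened once and the cached object came back. In Java you could extract the counting into a collaborator and verify it with Mockito's verify(..., times(1)), or use a spy. The counting idea, sketched in Python for brevity (a loose translation, not the exact Java code):

```python
class CharacterCounterService:
    """Counts occurrences of each character, caching results per input."""

    def __init__(self):
        self._cache = {}
        self.compute_calls = 0      # test hook: how often we really computed

    def count_characters(self, text):
        if text is None:
            raise ValueError("The method parameter can't be None")
        if text in self._cache:
            return self._cache[text]     # cache hit: no recomputation
        self.compute_calls += 1
        result = {}
        for ch in text:
            result[ch] = result.get(ch, 0) + 1
        self._cache[text] = result
        return result

service = CharacterCounterService()
first = service.count_characters("Hello")
second = service.count_characters("Hello")   # should hit the cache
assert service.compute_calls == 1            # computed only once
assert first is second                       # the very same cached object
```

The same two assertions (one computation, same object returned) carry over directly to a JUnit test once the counter is observable from outside the class.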

Unable to Drag & Drop in Selenium WebDriver 3.6

I am trying to drag & drop, but it's not working.

Here is my code.

Please help; I have spent so much time on this, but it is still not working.

Chrome: 62.0.3202.75, ChromeDriver: 2.33, Selenium: 3.6

public class Drag_And_Drop {
static String baseURl="https://www.google.com";
static WebDriver driver;

@BeforeMethod
public void openBrowser() {     
    System.setProperty("webdriver.chrome.driver", System.getProperty("user.dir") + "/drivers/chromedriver.exe");
    driver=new ChromeDriver();
    driver.get(baseURl);
    driver.manage().window().maximize();
    driver.manage().timeouts().implicitlyWait(2000, TimeUnit.SECONDS);
}

@Test
public void verifyCount() {

    WebElement searchBox = driver.findElement(By.xpath(".//*[@id='lst-ib']"));
    searchBox.sendKeys("jqwidget drag and drop");
    searchBox.sendKeys(Keys.ENTER);     

    WebElement link = driver.findElement(By.linkText("jQuery DragDrop, DragDrop plug-in, Drag and Drop ... - jQWidgets"));
    link.click();       

    driver.switchTo().frame(0);

    WebElement source = driver.findElement(By.xpath(".//*[@id='jqxWidgete3128591f541']"));
    source.click();

    WebElement target = driver.findElement(By.xpath(".//*[@id='cart']"));       

    Actions actions = new Actions(driver);
    actions.dragAndDrop(source, target).build().perform();
}   

@AfterMethod
public void closeBrowser() {
        driver.quit();
}

}

vendredi 27 octobre 2017

How to compare lists in pytest using assert statement?

I am trying to compare two lists using an assert statement to test a function with pytest. Whenever I compare them, my expected result is treated as a string with double quotes. This is what I have done so far:

# read data from xlsx file 

book = xlrd.open_workbook("nameEntityDataSet.xlsx")

first_sheet = book.sheet_by_index(0) # get first worksheet

# take second element(index starts from 0) from first row

se1 = first_sheet.row_values(1)[1]

print "se1 ", se1 #returns me list:
# take third element(index 2) from first row 

se2 = first_sheet.row_values(1)[2]

expected_output = first_sheet.row_values(1)[3]

#expected output is printed as:  [['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']]

current_output = [['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']]

#When I do assert to compare both outputs
assert current_output == expected_output

#I get error: 

E    assert [['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']] == "[['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']]"
test_wordAligner.py:15: AssertionError

I do not know why it adds double quotation marks to the expected output; because of this I am getting the AssertionError. I have searched a lot but did not find any helpful answer for my case. Please note that I want a solution in pytest, not unittest.
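The quotes in the diff mean the value read from the spreadsheet is a string, not a list: xlrd returns the cell's text. One way to fix the comparison is to parse the cell text into a real Python object first, e.g. with ast.literal_eval:

```python
import ast

# what xlrd hands back: the cell's text, i.e. a string
expected_output = "[['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']]"
current_output = [['UAE', 'United'], ['UAE', 'Arab'], ['UAE', 'Emirates']]

# a string is never == a list, which is exactly the AssertionError above:
assert expected_output != current_output

# parse the cell text into a Python list, then compare like with like:
parsed = ast.literal_eval(expected_output)
assert parsed == current_output
```

ast.literal_eval safely evaluates only Python literals (lists, strings, numbers, etc.), so it is preferable to eval() for spreadsheet data.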

Thanks in advance

How can I simulate User Input in a JUnit Test?

I have the following code to test:

public IGrid createIGrid() {
    while(size < 0){
        userInput();
    }
    GridArray gamefield= new GridArray(size);
    return gamefield;
}

public void userInput(){
    System.out.println("Bitte geben Sie die größe des Grid an. Der Wert muss größer als 0 sein.");
    int size = scanner.nextInt();
    if (size < 1) {
        throw new ImpossibleValueForArrayException();
    }
}

Now I want to test it. What can I do?
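The usual trick is to replace standard input with an in-memory stream before invoking the code under test; in Java that is System.setIn(new ByteArrayInputStream("5\n".getBytes())) executed before the Scanner is constructed. The same idea, sketched in Python (user_input below is a loose translation of the method above, not the original code):

```python
import io
import sys

def user_input():
    """Reads a grid size from stdin, like the Scanner-based userInput()."""
    size = int(input("Grid size: "))
    if size < 1:
        raise ValueError("impossible value for array")
    return size

# In the test, swap stdin for an in-memory stream before the call,
# then restore it so later tests are unaffected.
sys.stdin = io.StringIO("5\n")
try:
    assert user_input() == 5
finally:
    sys.stdin = sys.__stdin__
```

In JUnit the analogous test would call System.setIn(...) in a @Before method, construct the Scanner after that, and restore System.in in @After; note that the Scanner must be created after the swap, so it helps to inject the Scanner (or an InputStream) into the class rather than creating it statically.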

How to access Activity Object in Appium Testcase

How do I access the Activity object in an Appium test case?

driver.currentActivity() returns a string, so the Activity object itself cannot be accessed that way.

A/B testing website redesign and google reaction

I have an old website which I migrated to a newer technology, and I would like to split traffic before doing a 100% rollout. I'm thinking of redirecting a random 10% of traffic to the new website and the rest to the old one.

Has anyone done something similar? I'm wondering how Google reacts to such things: would they consider it cloaking and penalize us for doing so?

Lumen - seeder in Unit tests

I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.

As I want tests to be performed in a confined environment, I'm looking for the easiest way to input data in a dedicated database. Long story short, to this extent, I decided to use a MySQL dump of inserted data.

This is basically my seeder code:

public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}

Now here's the problem. In my unit test, I can call the seeder, but :

  • If I call the seeder in setUpBeforeClass(), it works. Although it doesn't fit my needs, as I want to be able to invoke different sets of data for different tests.
  • If I call the seeder within a test, the data is never inserted in the database (either with or without the transaction trait).
  • If I use DB::insert instead of ::raw or ::unprepared or ::statement without using a raw sql file, it works. But my inserts are too complicated for that.

Here's a few things I tried with the same results :

    DB::raw(file_get_contents(__DIR__.'/database/data1.sql'));
    DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));

    $seeder = new CheckTestSeeder();
    $seeder->run();

    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);

    $this->seeInDatabase('jackpot.progressive', [
        'name_progressive' => 'aaa'
    ]);

Any pointers on how to proceed and why I have different behaviors if I do that in the setUpBeforeClass() and within the test would be appreciated!

Selenium WD with Java - unable to do assertion with converted Double to String

String ValidPricePoint = "";
Double sumOfMinorRisk;
String sum1, sum2;

ValidPricePoint = ValidPricePoint.replaceAll(" ", "");
sumOfMinorRisk = Double.valueOf(ValidPricePoint);
sum1 = String.format("%,f", sumOfMinorRisk);
sum1 = sum1.replaceAll("\\.?0*$", "");
sum1 = sum1.replaceAll(",", "");
// sum1 = sum1.replaceAll("\\.?0*$", "");
sum2 = String.format("%,.0f", sumOfMinorRisk * 2);
Assert.assertEquals(driver.findElement(pdeath_risk_sum).getAttribute("value"), risk_sum1);

Hello everyone,

I have a little problem here. I am trying to convert a String into a Double to make some adjustments, and then convert it back into a String with the right formatting. The String is taken from a website element; its value can be, for example, "700 000".

So I get the value, then I remove the space in that value to make the first conversion possible. After that I make an adjustment, like multiplying that value by 2 (since that's how it changes on the website), and then return the value to its previous state, so in the end I should get "1 400 000".

Now the problem is that the test fails on the assertion, and it gives me an error saying it expected 700 000 but found 700 000 (that's literally the error I got). I admit that after the multiplication I get a value of 1 400 000.0000, but as you can see I used

sum1 = sum1.replaceAll("\\.?0*$", ""); sum1 = sum1.replaceAll(",", "");

or the much simpler "%,.0f" format; both methods give the same String value, which is (or is supposed to be) "1 400 000", or "700 000" for the variable sum1.

Any ideas where my logic fails, or how to get the assertion right in this case?

Side note: if I hard-code sum1 = "700 000", the assertion will not fail and the test will pass, but it will not do so when the data changes and with it the value on the website (because it's calculated on the fly).

Thank you for any help. Cheers!
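Not a fix for the Java formatting itself, but the symptom (expected 700 000 but found 700 000) usually means the two strings differ in an invisible character. Websites often render numbers with non-breaking spaces (U+00A0), which a replaceAll(" ", "") never removes. A quick sketch of the effect (shown in Python for brevity; the same applies to Java's String.replaceAll):

```python
# Two strings that render identically but differ in their space character.
visible = "700 000"            # regular spaces (U+0020)
from_site = "700\u00a0000"     # non-breaking space (U+00A0)

assert visible != from_site

# Removing plain spaces has no effect on the NBSP variant...
assert from_site.replace(" ", "") == "700\u00a0000"

# ...while stripping the NBSP explicitly works.
assert from_site.replace("\u00a0", "") == "700000"
```

If that is the culprit here, normalizing both sides (e.g. removing all whitespace characters, not just U+0020) before the assertion should make it pass.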

NoMethodError: undefined method `get' for RSpec

First of all, this is my first experience with Ruby. At the moment, I'm creating tests for a controller called Exporter in my application. The method of the controller I want to test is this:

def export_as_json(equipments)
    equipments_json = []
    equipments.each {|equipment|
        equipment_json = {
            :id => equipment.id,
            :title => equipment.title,
            :description => equipment.description,
            :category => equipment.category_id
        }
        equipments_json << equipment_json
    }

    respond_to do |format|
      format.json { render :json =>equipments_json }
    end
end

So, when I try to create a request for this method using this:

RSpec.describe ExporterController, type: :controller do
  get '/equipments/all', headers: { 'CONTENT_TYPE' => 'application/json' }, format: :json
  expect(response.response).to eq(200)
end

inside the exporter_controller_test.rb file I'm receiving this error:

NoMethodError: undefined method `get' for RSpec::ExampleGroups::ExporterController:Class

In Laravel, when I try to execute phpunit it only outputs help information and no test results

I have no idea why it's not running the tests but instead shows this information. I'm executing it from the vendor/bin folder and I'm using the example tests for now. When I try it on a fresh install of a Laravel project, everything is fine. I think I messed up something in my project somewhere but don't know where. Here is the output when I try to run it:

PHPUnit 5.7.23 by Sebastian Bergmann and contributors.

Usage: phpunit [options] UnitTest [UnitTest.php]
   phpunit [options] <directory>

Code Coverage Options:

  --coverage-clover <file>  Generate code coverage report in Clover XML format.
  --coverage-crap4j <file>  Generate code coverage report in Crap4J XML format.
  --coverage-html <dir>     Generate code coverage report in HTML format.
  --coverage-php <file>     Export PHP_CodeCoverage object to file.
  --coverage-text=<file>    Generate code coverage report in text format.
                        Default: Standard output.
  --coverage-xml <dir>      Generate code coverage report in PHPUnit XML format.
  --whitelist <dir>         Whitelist <dir> for code coverage analysis.
  --disable-coverage-ignore Disable annotations for ignoring code coverage.

Logging Options:

  --log-junit <file>        Log test execution in JUnit XML format to file.
  --log-teamcity <file>     Log test execution in TeamCity format to file.
  --testdox-html <file>     Write agile documentation in HTML format to file.
  --testdox-text <file>     Write agile documentation in Text format to file.
  --testdox-xml <file>      Write agile documentation in XML format to file.
  --reverse-list            Print defects in reverse order

Test Selection Options:

  --filter <pattern>        Filter which tests to run.
  --testsuite <name>        Filter which testsuite to run.
  --group ...               Only runs tests from the specified group(s).
  --exclude-group ...       Exclude tests from the specified group(s).
  --list-groups             List available test groups.
  --list-suites             List available test suites.
  --test-suffix ...         Only search for test in files with specified
                        suffix(es). Default: Test.php,.phpt

Test Execution Options:

  --report-useless-tests    Be strict about tests that do not test anything.
  --strict-coverage         Be strict about @covers annotation usage.
  --strict-global-state     Be strict about changes to global state
  --disallow-test-output    Be strict about output during tests.
  --disallow-resource-usage Be strict about resource usage during small tests.
  --enforce-time-limit      Enforce time limit based on test size.
  --disallow-todo-tests     Disallow @todo-annotated tests.

  --process-isolation       Run each test in a separate PHP process.
  --no-globals-backup       Do not backup and restore $GLOBALS for each test.
  --static-backup           Backup and restore static attributes for each test.

  --colors=<flag>           Use colors in output ("never", "auto" or "always").
  --columns <n>             Number of columns to use for progress output.
  --columns max             Use maximum number of columns for progress output.
  --stderr                  Write to STDERR instead of STDOUT.
  --stop-on-error           Stop execution upon first error.
  --stop-on-failure         Stop execution upon first error or failure.
  --stop-on-warning         Stop execution upon first warning.
  --stop-on-risky           Stop execution upon first risky test.
  --stop-on-skipped         Stop execution upon first skipped test.
  --stop-on-incomplete      Stop execution upon first incomplete test.
  --fail-on-warning         Treat tests with warnings as failures.
  --fail-on-risky           Treat risky tests as failures.
  -v|--verbose              Output more verbose information.
  --debug                   Display debugging information during test     execution.

  --loader <loader>         TestSuiteLoader implementation to use.
  --repeat <times>          Runs the test(s) repeatedly.
  --teamcity                Report test execution progress in TeamCity format.
  --testdox                 Report test execution progress in TestDox format.
  --testdox-group           Only include tests from the specified group(s).
  --testdox-exclude-group   Exclude tests from the specified group(s).
  --printer <printer>       TestListener implementation to use.

Configuration Options:

  --bootstrap <file>        A "bootstrap" PHP file that is run before the tests.
  -c|--configuration <file> Read configuration from XML file.
  --no-configuration        Ignore default configuration file (phpunit.xml).
  --no-coverage             Ignore code coverage configuration.
  --no-extensions           Do not load PHPUnit extensions.
  --include-path <path(s)>  Prepend PHP's include_path with given path(s).
  -d key[=value]            Sets a php.ini value.
  --generate-configuration  Generate configuration file with suggested settings.

Miscellaneous Options:

  -h|--help                 Prints this usage information.
  --version                 Prints the version and exits.
  --atleast-version <min>   Checks that version is greater than min and exits.

perform testing for Fragments in Android

I want to test a Fragment. So far I am only able to run instrumentation tests. I want to perform a click and set values for the TextView. How is that possible? Here is my code; please help me resolve this issue.

@RunWith(AndroidJUnit4.class)
public class LearnFactAdditionTest extends
        ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity testingActivity;
    private Learn_FactsFragment testFragment;

    public LearnFactAdditionTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();

        testingActivity = getActivity();

        testFragment = new Learn_FactsFragment();
        testingActivity.getFragmentManager().beginTransaction()
                .add(R.id.frag, testFragment, null).commit();

        getInstrumentation().waitForIdleSync();
    }

    @Test
    public void testGameFragmentsTextViews() {
        String empty = "";

        TextView textView = (TextView) testFragment.getView().findViewById(R.id.txt_first_num_add);
        assertTrue("Empty stuff", textView.getText().equals(empty));
    }

    public void testMainThread_operations() {
        Espresso.onView(ViewMatchers.withId(R.id.txt_first_num_add))
                .perform(ViewActions.typeText("Hi"));
    }
}

I am getting an error like this: "No instrumentation registered! Must run under a registering instrumentation."

Flask-Testing: how to hook up routes?

I'm new to Flask and am trying to understand how to initialise the app in tests so that the routes are hooked up. According to the Flask-Testing docs this is done with (for the LiveServerTestCase):

class MyTest(LiveServerTestCase):

    def create_app(self):
        app = Flask(__name__)
        app.config['TESTING'] = True
        # Default port is 5000
        app.config['LIVESERVER_PORT'] = 8943
        # Default timeout is 5 seconds
        app.config['LIVESERVER_TIMEOUT'] = 10
        return app

However, I have a separate module containing the views using @app.route pattern to register them. My question is, how can I register these routes in the testcase so that the app is cleanly created and torn down? Currently I can only make this work by importing the app from the views module itself, which feels like the wrong way of doing things. What is the usual way of setting this up?
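A common way to avoid importing the global app from the views module is the application-factory pattern: register the views on a Blueprint and have create_app() wire them up, so each test builds (and tears down) a fresh app. A minimal sketch; the blueprint name, the /ping route, and the module layout are illustrative, not from the question:

```python
from flask import Flask, Blueprint

# Normally this lives in your views module, replacing @app.route:
views = Blueprint("views", __name__)

@views.route("/ping")
def ping():
    return "pong"

def create_app(testing=False):
    # Fresh app per call; this is what the testcase's create_app() returns.
    app = Flask(__name__)
    app.config["TESTING"] = testing
    app.register_blueprint(views)  # routes hooked up here
    return app

# Each test can then build its own isolated app:
client = create_app(testing=True).test_client()
assert client.get("/ping").data == b"pong"
```

The key point is that the views module no longer needs an `app` object at all, so nothing global leaks between tests.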

Simple test case management tool with test execution details

First of all, this is not a too-broad "please advise some good TCMS" question. I have a specific problem, and I would ask you to keep that in mind.

In my team we're now trying to select some Test Case Management System to use. Our requirements are not rocket science, I believe:

  • Simple enough to store test cases and execution results. No need for built-in defects and requirements management.
  • Some API to be able to import automated test execution results (would be great, if there is Jenkins plugin)
  • And the most crucial part, I'm becoming really desperate about it since I've already tried almost all "popular" TCMS and they have really nothing here. Custom fields for each test execution. What I mean - often the idea in existing TCMS is "Test plan with test cases - Test run - Test case execution". So you have some entity called "Test run" which is group of tests to execute. If you're lucky enough, there is support for some custom stuff for test run (i.e. environment settings - OS, browser, hardware). So it seems that many teams are happy with creating "Windows Chrome testing", "Linux Firefox testing" test runs and so on. But in our team it's not acceptable, because we prefer to see "product version A testing" and table of test executions with different parameters. So each "row" in the table - test case + environment settings + status + bug links. Obviously, test cases can be duplicated, because we can execute one test case on 5*5*5 different environments. It's not feasible to create 5*5*5 test runs!

The only systems I've seen that support the last bullet are TestPad (which is mostly just a checklist without any smart functionality) and HP ALM (which is ancient, ugly and slow). Everything else gives you only "comment" and "bugs" fields for the test-execution entity.

So, my questions are:

  1. Have you run into the same issue in your team? How did you solve it in the end?
  2. Could you recommend anything that my team and I could use?

P.S. Some of the tools I've tried so far: Zephyr for Jira, HP ALM, Kiwi, qTest, TestCaseLab, practiTest, TestPad

How do I set a different system property for each JVM Gradle spawns when testing in parallel?

Gradle allows me to start multiple jvms for testing like so:

test {
   maxParallelForks = 10
}

Some of the tests for an application I have requires a fake ftp server which needs a port. This is quite easy to do with one jvm:

test {
   systemProperty 'ftpPort', 10000
}

However, when running in parallel I would need to start 10 fake ftp servers. How do I add a custom system property for each jvm spawned by gradle?

Something like:

test {
   maxParallelForks 10
   customizeForks { index ->
       systemProperty 'ftpPort', 10000 + index
   }
}
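One possible approach, sketched under the assumption that the community-documented behavior still holds: Gradle exposes a unique id to each forked test JVM through the `org.gradle.test.worker` system property, so you can pass a single base port and let each fork derive its own. The `ftpBasePort` name is made up here, and worker ids are unique but not guaranteed to start at 0, so treat the derived ports as distinct rather than contiguous:

```groovy
test {
    maxParallelForks = 10
    // Pass one base value; each forked JVM derives its own port from it.
    systemProperty 'ftpBasePort', 10000
}
```

In the test code, something like `Integer.parseInt(System.getProperty("ftpBasePort")) + Integer.parseInt(System.getProperty("org.gradle.test.worker"))` would then give every fork a distinct port for its fake FTP server.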

jUnit run certain test cases from different test classes each having multiple tests

How do I run certain selected test cases from different test classes each of which has multiple tests in it?

For example, I have 3 test classes: TestClassA, TestClassB and TestClassC. Each of these test classes has 3 tests.

public class TestClassA {
  @Test
  public void testA1() {
    //test method
  }

  @Test
  public void testA2() {
    //test method
  }

  @Test
  public void testA3() {
    //test method
  }
}


public class TestClassB {
  @Test
  public void testB1() {
    //test method
  }

  @Test
  public void testB2() {
    //test method
  }

  @Test
  public void testB3() {
    //test method
  }
}


public class TestClassC {
  @Test
  public void testC1() {
    //test method
  }

  @Test
  public void testC2() {
    //test method
  }

  @Test
  public void testC3() {
    //test method
  }
}

With reference to the above sample code, I would like to run the tests testA1, testA2, testB2, testC2 and testC3 all at once. Is there any configuration for that? Thank you.

How to validate my xml parsing code? (Python, against schema or example document)

So for a project I need to parse a very specific XML format. Parsing just the most basic stuff is easy, but then there are always many cases not covered. For example, one document may have only one tag of a specific element, while the next one has multiple tags, and so on and so forth.

Because of this, I would like to take a more systematic approach, where I can validate my parsing code against either the schema or an example doc (and write tests for it). The problem with the example doc is that I do not have one; all the docs I parse seem to have slight (valid) differences, so I cannot validate against just one of them.

Searching and other questions only talk about validating the xml documents, but those are expected to be valid already. The format in question is this http://ift.tt/29cazOn (though I guess it shouldn't matter)
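In the meantime, one systematic option is to hand-write tiny synthetic documents that each exercise one structural variation (a tag absent, present once, repeated) and pin the parser's behavior on them. A sketch using only the standard library, with `parse_titles` standing in for the real parsing code and `<feed>`/`<title>` as made-up tag names:

```python
import xml.etree.ElementTree as ET

def parse_titles(doc):
    # Hypothetical stand-in for the real parsing code under test:
    # collect the text of every <title> element, however many there are.
    root = ET.fromstring(doc)
    return [el.text for el in root.iter("title")]

# Minimal documents covering the structural variations real files exhibit.
assert parse_titles("<feed></feed>") == []
assert parse_titles("<feed><title>a</title></feed>") == ["a"]
assert parse_titles("<feed><title>a</title><title>b</title></feed>") == ["a", "b"]
```

Each real-world surprise (a repeated element, a missing optional tag) then becomes one more tiny document in the test suite, instead of a one-off fix.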

1 TB of Test data needed with all possible file formats

We need to test an application with a minimum of 1 TB of test data covering all text/image file formats. Is it possible to download this somewhere? Are any torrents or links available? Please help! Thanks in advance!

jeudi 26 octobre 2017

React javascript testing with Rails webpacker

Are there any options out there for testing Rails apps that use webpacker?

TestNG: print failed, passed, total tests executed in @AfterSuite

Is there any way that I can print the number of failed, passed, and total tests executed in @AfterSuite?

I know this can be done by overriding ITestListener, but is there a more direct approach to get the counts in @AfterSuite?

Thanks in advance for the help.

Run each cucumber test independently

I'm using Cucumber and Maven in Eclipse, and what I'm trying to do is run each test independently. For example, I have library system software that basically allows users to borrow books and do other things.

One of the conditions is that users can only borrow a max of two books so I wrote to make sure that the functionality works. This is my feature file:

Scenario: Borrow over max limit
Given "jim@help.ca" logs in to the library system
When "Sun@help.ca" order his first book with ISBN "9781611687910"
And "Sun@help.ca" order another book with ISBN "9781442667181"
And "Sun@help.ca" tries to order another book with ISBN "1234567890123"
Then Sun will get the message that says "The User has reached his/her max number of books"

I wrote a corresponding step definition file and everything worked out great. However, in the future I want to use the same username ("Sun@help.ca") for borrowing books, as though Sun@help.ca had not yet borrowed any books. I want each test to be independent of the others.

Is there any way of doing this? Maybe there's something I can put into my step definition classes. If there's a way, please help me. Any help is greatly appreciated, and I thank you in advance!

Why use fixtures instead of global variables when testing?

I'm new to testing, please bear with me. If I'm lacking fundamental conceptual knowledge, let me know in a comment. I'd like to improve this question (or realize it shouldn't be asked), but I can't tell what's wrong from downvotes alone.

Question

When using pytest, is there a downside to using global variables instead of fixtures?

Code (with a global variable)

DATASET = FlickrPortraitMaskDataset(root=ROOT)

class TestFlickrPortraitMaskDataset(object):

    def test_len(self):
        nb_portraits = len(get_fnames(ROOT + "portraits/"))
        assert len(DATASET) == nb_portraits

    def test_getitem(self):
        idx = random.randint(0, len(DATASET) - 1)
        sample = DATASET[idx]
        assert isinstance(sample, dict)

Output

TestFlickrPortraitMaskDataset::test_len PASSED
TestFlickrPortraitMaskDataset::test_getitem PASSED

Code (with a fixture)

@pytest.fixture(scope="module")
def dataset():
    return FlickrPortraitMaskDataset(root=ROOT)

class TestFlickrPortraitMaskDataset(object):

    def test_len(self, dataset):
        nb_portraits = len(get_fnames(ROOT + "portraits/"))
        assert len(dataset) == nb_portraits

    def test_getitem(self, dataset):
        idx = random.randint(0, len(dataset) - 1)
        sample = dataset[idx]
        assert isinstance(sample, dict)

Output

TestFlickrPortraitMaskDataset::test_len PASSED
TestFlickrPortraitMaskDataset::test_getitem PASSED

The output is the same.


Common

import random

import pytest

from portraitseg.utils import get_fnames
from portraitseg.pytorch_datasets import FlickrPortraitMaskDataset


ROOT = "../data/portraits/flickr/cropped/"

Android 27: LogLogcatRule and RuleLoggingUtils replacement?

How does one test log output for Android classes? Prior to Android 27, LogLogcatRule and related classes like RuleLoggingUtils served this purpose, but it appears they were removed.

Is there any Desktop Application testing tool that doesn't require interactive desktop?

I want to perform regression testing on desktop applications, and I am able to do it with the help of AutoIt and CodedUI, but the problem is that they need an interactive desktop. When running test cases on a server where no monitor is available, the cases fail because no interactive desktop is available.

how to automatically change directory for all doctests running pytest

I would like to change to pytest's temporary directory at the beginning of each doctest, and I was wondering if there is a way to do it automatically without starting every doctest with:

>>> tmp = getfixture('tmpdir')                                                                                            
>>> old = tmp.chdir()                                                                                                   

Angular2 Test (jasmine) mocking Service. Can't resolve all parameters for Service: (?, ?)

I am trying to mock the following service to test my component:

Production:

@Injectable()
export class UserService extends DtoService {
    // some not relevant stuff.
}

@Injectable()
export abstract class DtoService {
    constructor(private http: Http, private authHttp: AuthHttp) {}

    get() {
        return this.http.get(...);
    }
}

@Component({
    selector: 'users',
    templateUrl: 'users.component.html',
    styleUrls: ['users.scss'],
    providers: [UserService]
})
export class UsersComponent implements OnInit, OnDestroy {
    constructor(private userService: UserService) {}
    // etc
}

Test:

class MockUserService {
    public get(): Observable<User> {
        return Observable.of(new User({id: 1, email: 'user1@test.com'}));
    }
}

beforeEach(async(() => {

    TestBed
        .configureTestingModule({
            declarations: [UsersComponent],
            providers: [{provide: UserService, useClass: MockUserService}]
        });
    this.fixture = TestBed.createComponent(UsersComponent);

}));

This is supposed to be the most classical use case, right (see the Angular testing docs)? But here I get the error:

Can't resolve all parameters for UserService: (?, ?).

But UserService is not supposed to be instantiated at all. What am I doing wrong? Thank you community!

Python Testing a CLI application

First off: I am quite new to the field of testing. So there might be some things wrong in my understanding.

My situation is the following: I'm working on something framework-ish where the end user has to inherit from a parent class I've predefined. They're required to override four abstract methods defined in the parent class and write their magic code inside these methods.

The framework offers three different modes to run in. Because I want to ensure no mistakes are being made I've added a positional CLI argument to set that mode.

Here is my problem: I'm trying to write a unit test for every module in my application. Because the parent class mentioned above contains quite a lot of code, I'd like to unit-test it as well.

Needless to say, it will not run without providing the required positional argument.

I've looked around to see how I can provide this argument programmatically, but the internet tells me that a unit test should be able to run without any outside information (which makes sense).

My question: How do I test such a module? Are there other types of tests that are suited for these kinds of problems?

Thank you in advance!
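For the positional-argument part specifically: if the CLI layer is built on argparse, letting the parsing function accept an explicit argv list makes it testable without any outside information, and mock.patch can cover code that reads sys.argv directly. A sketch with made-up mode names (the real framework's modes will differ):

```python
import argparse
from unittest import mock

def parse_mode(argv=None):
    # Hypothetical parser mirroring the framework's positional "mode" argument.
    parser = argparse.ArgumentParser()
    parser.add_argument("mode", choices=["dev", "sim", "live"])
    # argv=None falls back to sys.argv[1:], exactly like production use.
    return parser.parse_args(argv).mode

# Preferred in tests: pass argv explicitly, no outside state involved.
assert parse_mode(["sim"]) == "sim"

# If the code under test reads sys.argv itself, patch it for the test:
with mock.patch("sys.argv", ["prog", "live"]):
    assert parse_mode() == "live"
```

With this shape, the parent class's logic can be tested by constructing it directly with a mode value, leaving the argv handling to a thin, separately tested entry point.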

Firebase says "You do not have permission to create a test matrix with Test Lab." when trying to launch robotest

When I try to launch a test with Firebase I receive the message quoted in the title: "You do not have permission to create a test matrix with Test Lab."

What could be the problem?

Functional tests and DB

So the question is: is it good practice to ask the DB for the number of records in a table to make a test pass, or even to create some records right from the functional test?

Method that returns two classes in java

I have two classes:

public class UnoLoginPageUi {
    public final Input username = new Input("id=username");
    public final Input password = new Input("id=password");
    public final Button loginButton = new Button("name=login");
}

and

public class DuoLoginPageUi {
    public final Input username = new Input("id=usernameField");
    public final Input password = new Input("id=passwordField");
    public final Button loginButton = new Button("id=submitButton");
}

and in one common class I want to make something like that:

public void loginUsingUsernameAndPassword(String username, String password, String pageType) {
    getUi(pageType).username.waitForToBeDisplayed();
    getUi(pageType).username.setValue(username);
    getUi(pageType).password.setValue(password);
    getUi(pageType).loginButton.click();
}

where getUi() is a method that has an argument pageType (which is UNO or DUO).

private Class getUi(String pageType) {
    if (pageType.equals("UNO")) {
        return new UnoLoginPageUi();
    } else if (pageType.equals("DUO")) {
        return new DuoLoginPageUi();
    }
    return null;
}

However, it doesn't work, because this method would need a return type that covers both of these page classes with their selectors. How do I deal with that?

Execution failed for task ':compileIntegrationTestGroovy'

I created an application with

grails create-app TestApp

I've created test with

grails create-functional-test Test

When I tried to start test with

grails test-app

It gave me an error:

Execution failed for task ':compileIntegrationTestGroovy'.

The question is: how do I run the tests and solve this problem? Maybe I've missed some dependencies?

What parameters to setup in Jmeter

I have a scenario where:

Every 30 seconds, 10,000 users will hit the server, with a gap of roughly 0.5 seconds between each user's request, and this test should run for 1 hour.

Now, how do I set up all the JMeter parameters to achieve this?

When should I use more than one Thread Group?
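Whatever JMeter elements end up being used, it may help to pin down the target rates implied by the scenario first; the arithmetic below is just the numbers from the question restated:

```python
users_per_window = 10_000
window_s = 30
test_duration_s = 3600  # 1 hour

rps = users_per_window / window_s        # ~333.3 requests per second
per_minute = rps * 60                    # ~20,000 per minute, e.g. for a
                                         # Constant Throughput Timer target
total_requests = rps * test_duration_s   # ~1,200,000 requests over the hour

print(rps, per_minute, total_requests)
```

Having these targets explicit makes it easier to decide thread counts, ramp-up, and timer settings, whichever combination of elements is chosen.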

Salesforce Test Class Failed when I want to validate in the production

I am a beginner at Salesforce Apex coding. Currently I am trying to build a scheduler for my org, where I want to update the ManagerID from the UserRole.

In my testing org this code shows 93% code coverage, but when I try to deploy to production, the code coverage is only 67%. Can anybody help me with the code below?

AstroUpdateManagerID

public class AstroUpdateManagerID {

    public List<UserRole> UserRoleList1() {
        List<UserRole> userRole = new List<UserRole>();
        userRole = [select Id, Name, ParentRoleId from UserRole];
        return userRole;
    }

    List<UserRole> role = UserRoleList1();

    Map<Id, User> userParentMap = new Map<Id, User>();
    Map<Id, UserRole> userRoleMap = new Map<Id, UserRole>();

    public User HandleParentNull(UserRole userroleobj) {
        User userobj = new User();
        userobj = userParentMap.get(userroleobj.ParentRoleId);
        if (userobj == null) {
            UserRole userRole2 = userRoleMap.get(userroleobj.ParentRoleId);
            userobj = HandleParentNull(userRole2);
        }
        return userobj;
    }

    public String get_Manager_ID_Where_Partner_Owner_Is_NA(User u) {
        UserRole userRoleOBJ = userRoleMap.get(u.UserRoleId);
        return HandleParentNull(userRoleOBJ).Id;
    }

    public List<User> UserIdentification(List<User> user) {
        List<User> AllUserList = new List<User>();

        // Find user role by UserRole.Id
        for (UserRole ur : role) {
            userRoleMap.put(ur.Id, ur);
        }

        // Find user by User.UserRoleId
        for (User userparent : user) {
            userParentMap.put(userparent.UserRoleId, userparent);
        }

        for (User u : user) {
            if (u.Partner_User_Owner__c != 'NA' && u.UserRole.Name != 'Director of Sales' && u.Profile.Name != 'System Administrator') {
                u.ManagerId = u.Partner_User_Owner__c;
                AllUserList.add(u);
            } else if (u.UserRole.Name != 'Director of Sales' && u.Profile.Name != 'System Administrator') {
                u.ManagerId = get_Manager_ID_Where_Partner_Owner_Is_NA(u);
                AllUserList.add(u);
            }
        }
        return AllUserList;
    }
}

AstroUpdateManagerIdBatch

global class AstroUpdateManagerIdBatch implements Schedulable {

    global void execute(SchedulableContext sc) {
        AstroUpdateManagerID umi = new AstroUpdateManagerID();

        List<User> AllUser = new List<User>();
        AllUser = [select Id, Name, UserName, ManagerId, UserRoleId, UserRole.Name, isActive, Partner_User_Owner__c, Profile.Name from User Where IsActive=true and UserRole.Name!=null];
        update umi.UserIdentification(AllUser);
    }
}

TestDataAccessClassManagerID

@isTest(SeeAllData=true)
public class TestDataAccessClassManagerID {

    @isTest
    static void myTestMethodForManager() {

        List<User> AllUserList = [select Id, Name, UserName, ManagerId, UserRoleId, UserRole.Name, isActive, Partner_User_Owner__c, Profile.Name from User Where IsActive=true and UserRole.Name!=null];
        AstroUpdateManagerID umi = new AstroUpdateManagerID();

        List<User> userListUpdate = umi.UserIdentification(AllUserList);

        AstroUpdateManagerIdBatch bupdate = new AstroUpdateManagerIdBatch();
        String sch = '0 0 23 * * ?';

        Test.startTest();
        system.schedule('Test Territory Check', sch, bupdate);
        update userListUpdate;
        Test.stopTest();
    }
}

TFS - add all test cases from test suite

I have a problem with copying test cases from one test suite to another. I want to copy all the test cases in a specific test suite to another one (or copy a test suite with all its test cases from one test plan to another). I have tried to do so with a query, with no luck so far.

app.PressEnter() in Xamarin.UITest

I'm trying to call app.PressEnter(); when the keyboard appears on my iPad. The keyboard that appears shows a "Search" key instead of the usual "Return" key. On a keyboard with the "Return" key it works.

Any idea how to make it work?

mercredi 25 octobre 2017

Is there any way to revert Clipboard when getting the error OpenClipboard Failed (Exception from HRESULT: 0x800401D0 (CLIPBRD_E_CANT_OPEN))

I have seen other threads on this that pertain to WPF, but not to tests or general use. My question is whether there is a way to revert to the default Clipboard behavior. I literally rebooted my machine and did nothing but run a single test in Visual Studio that works for others but not for me. I believe the issue was due to installing 3D Clipboard, and I am fine with uninstalling it, but I am curious whether there is a way to tell the clipboard to go back to its default behavior, or else to see what is using it. I have never had this problem before and was curious whether anyone else has had similar issues.

How do you test an application when the test cases were written by another company?

How do you test an application when the test cases were written by another company? Say testers from company X have written manual test cases, and now my company, A, has been asked to understand and execute these test cases and showcase the results to the client.

What would be the ideal way to do this, given that I can't rewrite those cases?

Test Apex Class

I need some help putting my Visualforce page into production.

It works OK in development, but I can't create the test.

The Visualforce page has six filters and then runs a query.

The code from the report controller is the following.

     public void ReportFilter()
         { 
         
         if(selectedCCYId != null && selectedFilterDate == null )
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == null )
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy  Order by Order_Date__c];}    
         if(selectedCCYId != null && selectedFilterDate == 'ALL' )
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'ALL' )
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy  Order by Order_Date__c];}
         if(selectedCCYId != null && selectedFilterDate == 'TODAY')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = TODAY Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'TODAY')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = TODAY Order by Order_Date__c];}          
         if(selectedCCYId != null && selectedFilterDate == 'THIS_MONTH')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = THIS_MONTH Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'THIS_MONTH')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = THIS_MONTH Order by Order_Date__c];}          
         if(selectedCCYId != null && selectedFilterDate == 'LAST_MONTH')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = LAST_MONTH Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'LAST_MONTH')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = LAST_MONTH Order by Order_Date__c];}          
         if(selectedCCYId != null && selectedFilterDate == 'LAST_N_MONTHS:3')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = LAST_N_MONTHS:3 Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'LAST_N_MONTHS:3')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = LAST_N_MONTHS:3 Order by Order_Date__c];}          
         if(selectedCCYId != null && selectedFilterDate == 'LAST_N_MONTHS:6')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = LAST_N_MONTHS:6 Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'LAST_N_MONTHS:6')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = LAST_N_MONTHS:6 Order by Order_Date__c];}                       
         if(selectedCCYId != null && selectedFilterDate == 'THIS_YEAR')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = THIS_YEAR Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'THIS_YEAR')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = THIS_YEAR Order by Order_Date__c];}                       
         if(selectedCCYId != null && selectedFilterDate == 'LAST_YEAR')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :selectedCCYId and Order_Date__c = LAST_YEAR Order by Order_Date__c];}
         if(selectedCCYId == null && selectedFilterDate == 'LAST_YEAR')  
         {quo= [SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c,  related_product__r.Country__c,related_product__r.SKU__c, related_product__r.Enabled__c, related_product__r.Default_provider__c, related_product__r.Default_provider__r.Name, related_product__c, related_product__r.Tipo_producto__c,related_product__r.Brand__c,related_product__r.Brand__r.Name,related_product__r.Tax__c, related_product__r.CurrencyIsoCode, related_product__r.Cost__c, related_product__r.Discount__c, related_product__r.Discounted_cost__c, Quantity__c  FROM Rew_Order__c where Status__c = 'Pending' and Purchase_Order__c = ''  and  Platform_Name__c LIKE :filter1 and  related_product__r.Default_provider__r.Name LIKE :filter2 and related_product__r.Tipo_producto__c LIKE :selectedVendorId and related_product__r.Country__c LIKE :Ctry and related_product__r.CurrencyIsoCode LIKE :Ccy and Order_Date__c = LAST_YEAR Order by Order_Date__c];}                              

         }
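The branches above differ only in which currency variable is bound (selectedCCYId vs. Ccy) and which date literal is appended, so they could be collapsed with dynamic SOQL. A sketch, assuming the same controller fields and bind variables as the queries above (the field list is abbreviated; restore the full list from the original query):

```apex
// Sketch only: assumes the controller exposes filter1, filter2,
// selectedVendorId, Ctry, Ccy, selectedCCYId, selectedFilterDate and quo,
// as in the queries above. Field list abbreviated for readability.
public void ReportFilter() {
    String q = 'SELECT Order_Date__c, Id, Name, Platform_Name__c, Product__c, Quantity__c '
             + 'FROM Rew_Order__c '
             + 'WHERE Status__c = \'Pending\' AND Purchase_Order__c = \'\' '
             + 'AND Platform_Name__c LIKE :filter1 '
             + 'AND related_product__r.Default_provider__r.Name LIKE :filter2 '
             + 'AND related_product__r.Tipo_producto__c LIKE :selectedVendorId '
             + 'AND related_product__r.Country__c LIKE :Ctry ';
    // Bind either the selected currency or the default wildcard.
    String ccyFilter = (selectedCCYId != null) ? selectedCCYId : Ccy;
    q += 'AND related_product__r.CurrencyIsoCode LIKE :ccyFilter ';
    // Date literals such as TODAY or LAST_N_MONTHS:3 are SOQL keywords,
    // not bind values, so they are appended verbatim; validate the input
    // against a whitelist before concatenating to avoid SOQL injection.
    if (selectedFilterDate != null && selectedFilterDate != 'ALL') {
        q += 'AND Order_Date__c = ' + selectedFilterDate + ' ';
    }
    q += 'ORDER BY Order_Date__c';
    quo = Database.query(q);
}
```

This reduces the method to a single query and makes adding a new date filter a one-line whitelist change instead of two new copies of the query.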

Could you please help me create the corresponding test class?

I think I need to pass test values to the filters and run it.

Regards
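A test for a controller like this generally inserts records that the filters should match, sets the filter properties, calls ReportFilter(), and asserts on the result list. A minimal sketch, assuming the controller class is named ReportController (a hypothetical name; the post does not give it) and exposes the fields used in the queries above:

```apex
@isTest
private class ReportControllerTest {

    @isTest
    static void testReportFilterWithTodayDate() {
        // Assumption: Rew_Order__c can be inserted with just these fields;
        // required lookups (related_product__c, etc.) may need test records too.
        Rew_Order__c ord = new Rew_Order__c(
            Status__c = 'Pending',
            Platform_Name__c = 'TestPlatform',
            Order_Date__c = Date.today(),
            Quantity__c = 1
        );
        insert ord;

        ReportController ctrl = new ReportController(); // hypothetical class name
        ctrl.selectedCCYId = null;          // exercises the default-currency branch
        ctrl.selectedFilterDate = 'TODAY';  // exercises the TODAY date branch

        Test.startTest();
        ctrl.ReportFilter();
        Test.stopTest();

        System.assertNotEquals(null, ctrl.quo, 'Query should have run');
    }
}
```

Each if branch you want covered needs a call with the matching selectedCCYId / selectedFilterDate combination; remember that at least 75% coverage is required to deploy to production.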