Sunday, May 31, 2015

Backbone integration tests: looking for helpers like Ember's

I've been looking at Ember's test helpers for integration testing, and at some example tests. I would like to write tests for a Backbone.js app I am maintaining in a similar manner. I have looked at tools such as Chai and Sinon, and got the feeling that they produce much more convoluted test code that is hard to maintain.

  • Are there any libraries for Backbone similar to the Ember helpers?
  • Is there any reason why no one has written such helpers?
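
For context, a minimal sketch of what Ember-style helpers could look like on top of Backbone and jQuery (hypothetical names; Backbone ships nothing comparable, and Ember's versions also wait on the run loop, which is the hard part to replicate):

// Hypothetical Ember-style helpers for a Backbone app.
function visit(route) {
  Backbone.history.navigate(route, { trigger: true }); // go through the router
}

function fillIn(selector, value) {
  $(selector).val(value).trigger('change');
}

function click(selector) {
  $(selector).trigger('click');
}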

How to read text from a page using Helium?


I want to read the text shown in the picture and print it using Helium. I tried the following code:

import org.openqa.selenium.WebDriver;
//import com.heliumhq.selenium_wrappers.WebDriverWrapper;
import static com.heliumhq.API.*;


public class mainClass {

    public static void main(String[] args) {
        WebDriver ff = startFirefox("http://ift.tt/1JiIdet");
        waitUntil(Text("Streams Tech, Inc.").exists);

        streamstech lg = new streamstech();
        lg.login(ff);
        if (getDriver().getValue().contains("You have not purchased any product yet. Please visit our product list to try out different products!"))
            System.out.println("You have not purchased any product yet. Please visit our product list to try out different products!");
        else
            System.out.println("Test failed :(");
        //String text = getValue("You have not purchased any product yet. Please visit our product list to try out different products!");
        killBrowser();



    }

}

But for getValue() I am getting the error "The method getValue() is undefined for the type WebDriver". I'd appreciate your help. Thanks in advance.
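
Not a Helium-specific fix, but since startFirefox() returns a plain Selenium WebDriver, one hedged fallback is reading the text with Selenium directly (the XPath below is an assumption about the page structure):

import org.openqa.selenium.By;

// locate the element containing the message and read its text via Selenium
String text = ff.findElement(
        By.xpath("//*[contains(text(), 'You have not purchased any product yet')]"))
    .getText();
System.out.println(text);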

Count Sort Algorithm, Unable to Sort Perfectly

I have been trying to implement the counting sort algorithm.

Every time I run the algorithm, it gives me a wrong answer at the highest and lowest values, and at index 1.

It has been 20 continuous hours, and I am unable to track down what I am doing wrong...

Generated_Array 90  27  58  111 105 39  24  144 19  91  38  109 39  70  177 70  122 80  75  115 
Sorted_Array    0   19  24  27  38  39  39  58  70  70  75  90  91  105 109 111 115 122 144 0   

Generated_Array 142 67  159 142 41  181 135 159 76  175 161 70  94  131 113 186 102 28  104 80  
Sorted_Array    0   186 41  67  70  76  80  94  102 104 113 131 142 142 159 159 161 175 181 0

Generated_Array 18  176 9   118 90  34  147 6   93  63  82  58  27  192 126 135 173 114 138 101 
Sorted_Array    0   6   18  27  34  58  63  82  90  93  101 114 118 126 135 138 147 173 176 0

Generated_Array 78  173 131 22  71  61  79  198 128 15  163 138 74  144 96  26  35  192 141 87  
Sorted_Array    0   198 22  26  35  61  71  78  79  87  96  128 131 138 141 144 163 173 192 0   

Generated_Array 29  139 81  81  65  12  164 76  119 95  164 41  125 184 144 59  179 143 89  33  
Sorted_Array    0   29  33  41  59  65  76  81  81  89  95  119 125 139 143 144 164 164 179 0   

Generated_Array 42  161 157 170 123 163 8   31  124 169 79  7   189 98  133 147 105 57  133 132 
Sorted_Array    0   7   8   42  57  79  98  105 123 124 132 133 133 147 157 161 163 169 170 0

Here is the algorithm I am using

int[] Counting_sort(int[] Array, int Max)
        {
            int No_Of_Elements = Array.Length;
            int[] Sorted_Array = new int[Array.Length];
            int[] C = new int[Max+1];

            for (int i = 0; i < Max; i++)
            {
                C[i] = 0; 
            }

            for (int j = 1; j <No_Of_Elements; j++)
            {
                C[Array[j]] = C[Array[j]] + 1;
            }

            for (int i = 1; i <Max; i++)
            {
                C[i] = C[i] + C[i - 1];
            }

            for (int j = No_Of_Elements-1; j >= 0; j--)
            {
                Sorted_Array[C[Array[j]]] = Array[j];
                C[Array[j]] = C[Array[j]] - 1;
            }
            return Sorted_Array;
        }
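
For comparison, a corrected sketch. Three things stand out in the original: the counting loop starts at index 1 and skips the first element, the two loops over C stop one short of index Max, and the count has to be decremented before it is used as a position, otherwise everything lands one slot too high:

int[] Counting_sort_fixed(int[] array, int max)
{
    int n = array.Length;
    int[] sorted = new int[n];
    int[] counts = new int[max + 1];        // zero-initialized by the runtime

    for (int j = 0; j < n; j++)             // start at 0, not 1
        counts[array[j]]++;

    for (int i = 1; i <= max; i++)          // include the slot for max itself
        counts[i] += counts[i - 1];

    for (int j = n - 1; j >= 0; j--)
    {
        counts[array[j]]--;                 // decrement first: counts are 1-based positions
        sorted[counts[array[j]]] = array[j];
    }
    return sorted;
}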

Running xunit tests on tests.dll files in the test folders

I'm using xunit.console.exe to run the tests.dll in the test folder. I can do this successfully for individual tests by calling the console.exe path and running it in the test folder. I'm trying to figure out how to do it for different tests.dll files located in different folders under the main tests folder. One solution I saw was to copy xunit.console.exe to the bin tests folder and run it there; another claimed to call both the path of the console.exe and the paths of the tests.dll files and run them. I'm trying to figure out a simple way to do this that I can understand. Any help or direction is appreciated. Thanks.
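
A hedged sketch of the batch-file route, assuming the test assemblies follow a *.Tests.dll naming pattern under the main tests folder (adjust both paths to your layout):

rem run-all-tests.bat — run every matching assembly found under the tests tree
for /r "C:\path\to\tests" %%f in (*.Tests.dll) do (
    "C:\path\to\xunit.console.exe" "%%f"
)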

Testing multiple classes with 1 test

Say I have an interface with 2 classes that both implement it. So we have interface I, with classes A and B. For these 2 classes, we need to test the same implemented function, doSomething(), with JUnit4. There are some dependencies, so we're using Mockito. An example test looks like:

@Mock private dependantClass d;

@Test
public void test() {
    A.doSomething();
    verify(d).expectedBehavior();
}

I've written the test suite for A (4 tests) with no problems. However, now I have to restructure the test suite so I can execute the same test class on both A and B objects. For this, I should use a parallel class hierarchy.

This has left me stumped. I've tried using the @Parameters annotation, but this gives me the error that I have too many input arguments. I've tried making a super test class that both ATest and BTest extend from, but I'm guessing I'm not doing it right, because I get NullPointerExceptions.

Actually copying all the test cases and just changing A to B passes all the tests, which is to say these 2 classes do the same thing. I realize that sounds like faulty design, and to be honest, it probably is. However, I do not have the option of altering the code; I just have to test it.

Am I just doing things wrong? How should I implement this?
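
One sketch of the parameterized route: make the implementation itself the single parameter, which avoids the "too many input arguments" error (class, field and method names below are assumptions, and wiring the mock into A/B remains app-specific):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

import static org.mockito.Mockito.verify;

@RunWith(Parameterized.class)
public class DoSomethingTest {

    @Parameters
    public static Collection<Object[]> implementations() {
        // one row per implementation of interface I
        return Arrays.asList(new Object[][] { { new A() }, { new B() } });
    }

    private final I impl;

    @Mock
    private DependantClass d;   // hypothetical dependency type

    public DoSomethingTest(I impl) {
        this.impl = impl;
    }

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void doSomethingShowsExpectedBehavior() {
        impl.doSomething();
        verify(d).expectedBehavior();
    }
}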

Transferring large application to Android Wear through Android Studio

I am developing a large application for Android Wear through Android Studio (~200 MB). Trying to test the application on my LG G Watch R through "Debugging over Bluetooth" is taking a lot of time to send the large app to the Watch.

Are there any alternatives / faster methods to send the application to the Watch for testing?

Thank you.

How do I test my Camel Route if I have a choice operation?

If I have a Camel route that implements the Content Based Routing EIP (a choice operation), how do I test it? I'm new to Camel, so I'm unsure how to do it. Below is a sample of the code that has to be tested.

public void configure() throws Exception 
{   
    onException(Exception.class).handled(true).bean(ErrorHandler.class).stop();

    from("{{input}}?concurrentConsumers=10")
    .routeId("Actions")
        .choice()
            .when().simple("${header.Action} == ${type:status1}")
                .bean(Class, "method1")
            .when().simple("${header.Action} == ${type:status2}")
                .bean(Class, "method2")
            .when().simple("${header.Action} == ${type:status3}")
                .bean(Class, "method3")
            .otherwise()
                .bean(Class, "method4")
        .end();       
}

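One common sketch (hedged: the builder and bean names below are placeholders) is CamelTestSupport plus adviceWith, replacing the {{input}} endpoint so each Action header can be fed in directly:

import org.apache.camel.builder.AdviceWithRouteBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class ActionsRouteTest extends CamelTestSupport {

    @Override
    public boolean isUseAdviceWith() {
        return true;   // start the context manually, after advising the route
    }

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new ActionsRouteBuilder();   // hypothetical: the builder shown above
    }

    @Test
    public void status1IsRoutedToMethod1() throws Exception {
        context.getRouteDefinition("Actions").adviceWith(context, new AdviceWithRouteBuilder() {
            @Override
            public void configure() throws Exception {
                replaceFromWith("direct:start");   // bypass the {{input}} endpoint
                // weaveById()/mockEndpoints() can swap the bean calls for mock: endpoints
            }
        });
        context.start();

        template.sendBodyAndHeader("direct:start", "some body", "Action", "status1");
        // assert against mock endpoints, or verify the bean, depending on your wiring
    }
}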

Thanks, Gautham

What methods are available in mathematics or applied mathematics to find the dependency among two or more entities?

I want to use it to calculate the dependency between two modules in an integrated software system.

Meteor: recommended end-to-end testing tools

Which tools would you recommend for end-to-end Meteor application testing?

Requirements:

  • simulation of clients
  • subscriptions
  • Meteor calls
  • waiting time between thread starts
  • waiting time between calls

Run Django tests in PyCharm with coverage

I'm quite a beginner with Django, especially with testing. Since testing is a best practice, I hope I can get this up and running...

I just started a project (called leden) and made my first test file, test_initial.py:

from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.test import TestCase

class test_LidViewTests(TestCase):
    def setUp(self):
        self.user = User.objects.create_user(username='jacob', email='jacob@pils.com', password='top_secret')
        self.client.login(username='jacob', password='top_secret')

    def test_view_non_existing_lid(self):
        response = self.client.get(reverse('leden:lid', kwargs={'lid_id': 1}))
        self.assertEqual(response.status_code, 404)

When I run the tests with the command python manage.py test, all tests are run. When I try to run my tests in PyCharm, however (I used this tutorial), I get the following errors:

/home/mathijs/.virtualenvs/ledenbestand/bin/python3.4 /opt/pycharm-3.4/helpers/pycharm/django_test_manage.py test leden.tests /home/mathijs/Development/ledenbestand
Testing started at 17:00 ...
/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.utils.unittest will be removed in Django 1.9.
  return f(*args, **kwds)

/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.utils.unittest will be removed in Django 1.9.
  return f(*args, **kwds)

Traceback (most recent call last):
  File "/opt/pycharm-3.4/helpers/pycharm/django_test_manage.py", line 127, in <module>
    utility.execute()
  File "/opt/pycharm-3.4/helpers/pycharm/django_test_manage.py", line 102, in execute
    PycharmTestCommand().run_from_argv(self.argv)
  File "/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
    super(Command, self).run_from_argv(argv)
  File "/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/site-packages/django/core/management/base.py", line 390, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/site-packages/django/core/management/commands/test.py", line 74, in execute
    super(Command, self).execute(*args, **options)
  File "/home/mathijs/.virtualenvs/ledenbestand/lib/python3.4/site-packages/django/core/management/base.py", line 441, in execute
    output = self.handle(*args, **options)
  File "/opt/pycharm-3.4/helpers/pycharm/django_test_manage.py", line 89, in handle
    failures = TestRunner(test_labels, verbosity=verbosity, interactive=interactive, failfast=failfast)
  File "/opt/pycharm-3.4/helpers/pycharm/django_test_runner.py", line 228, in run_tests
    extra_tests=extra_tests, **options)
  File "/opt/pycharm-3.4/helpers/pycharm/django_test_runner.py", line 128, in run_tests
    return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)
AttributeError: 'super' object has no attribute 'run_tests'

Do you guys have any idea how I can fix this?

Looking for an easy way to test web site on different devices

Does someone know a tool for testing a site on all devices and browsers, skipping the irritating procedure of retyping the URL over and over again in every browser on every device?

Saturday, May 30, 2015

XMAX - XOR Maximization getting wrong answer

I am trying to solve the XMAX - XOR Maximization problem on SPOJ. Here is my code:

#include <stdio.h>
#include <stdlib.h>   /* for malloc() */
struct node{
    int value;
    struct node *left;
    struct node *right;
};
void insert(int n, int pos, struct node *t);
int find(int a[], int n);
struct node *alloc();
int findmax(int n, int p, struct node *a);


int main()
{

    int n;
    scanf("%d", &n);
    int a[100000];
    int i;
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    int max = find(a, n);
    printf("%d\n", max);

    return 0;
}
void insert(int n, int pos, struct node *t)
{
    if (pos >= 0)
    {
        struct node  *m;
        int bit = (1 << pos)&n;
        if (bit)
        {


            if (t->right == NULL)
            {
                m = alloc();
                m->value = 1;
                m->left = NULL;
                m->right = NULL;
                t->right = m;
            }


            if (pos == 0)
            {
                m = alloc();
                m->value = n;
                m->left = NULL;
                m->right = NULL;
                t->right->left = m;
            }


            insert(n, pos - 1, t->right);
        }
        else
        {


            if (t->left == NULL)
            {
                m = alloc();
                m->value = 0;
                m->left = NULL;
                m->right = NULL;
                t->left = m;
            }


            if (pos == 0)
            {
                m = alloc();
                m->value = n;
                m->left = NULL;
                m->right = NULL;
                t->left->left = m;
            }

            insert(n, pos - 1, t->left);
        }
    }
}

struct node *alloc()
{
    return (struct node *) malloc(sizeof(struct node));
}

int find(int a[], int n)
{
    int ans = 0;
    int z = 0;
    struct node root;
    root.value = 0;
    root.left = root.right = NULL;
    insert(0,31, &root);
    int i;
    for (i = 0; i < n; i++)
    {
        z = z^a[i];
        insert(z, 31, &root);
        int v = findmax(z,31, &root);
        ans = (ans>v) ? ans : v;

    }
    return ans;
}
int findmax(int n, int p, struct node *a)
{
    if (p >= 0)
    {

        int bit = (1 << p)&n;
        if (bit)
        {


            if (a->left != NULL)
            {
                return findmax(n, p - 1, a->left);
            }
            else
                return findmax(n, p - 1, a->right);

        }
        else
        {

            if (a->right != NULL)
            {
                return findmax(n, p - 1, a->right);
            }
            else
                return findmax(n, p - 1, a->left);
        }

    }
    else
        return (a->left->value) ^ n;
}

I checked a number of test cases and got the right output, but I am getting a wrong answer on submission. Please help me find where this code fails. I used a trie and the property that F(L,R) = F(1,R) XOR F(1,L-1) to solve this question, where F(L,R) is the XOR of the subarray from L to R.
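
For reference, the identity this approach relies on, written out: let P(i) be the XOR of the first i elements, with P(0) = 0. Because x XOR x = 0, the shared prefix cancels:

    P(i) = a[1] XOR a[2] XOR ... XOR a[i]
    F(L, R) = P(R) XOR P(L - 1)

So the maximum subarray XOR is the maximum of P(R) XOR P(L-1) over all pairs, which is what querying the trie of prefix values computes.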

Right directory structure for testing

I'm new to Python testing and I want to set up a structure similar to the one you can have when testing Ruby with RSpec. So I followed this blog post, but it's not working for me. My current structure is:

/brute-force
    /source
        __init__.py
        graph.py
    /tests
        __init__.py
        test_graph.py

When running nosetests in the root brute-force directory, it says that 0 tests were run.

Call Method through two classes in Selenium C#

This is probably a stupid question, but I haven't found an answer that leads me to a solution yet.

Say I have a test method to verify the functionality of a login portal. It's in TestClassA. I want to run that method in TestClassB's TestInitialize method so I can reliably have Selenium start from a blank slate when testing features past that login portal.

Here's the test code in question:

using Test_Login_Elements;
using Dashboard;

namespace Test_Dashboard_Elements
{
    [TestClass]
    public class DashboardTests
    {
        IWebDriver _driver;
        DashboardElements dash;

        [TestInitialize]
        public void Test_Setup()
        {
            dash = new DashboardElements(_driver);
            _driver = new FirefoxDriver();
            _driver.Navigate().GoToUrl("exampleurl.com/login");
            dash.Login();
        }
    }
}

This creates an instance of DashboardElements and passes the Selenium WebDriver, then calls the login method from DashboardElements (DashboardElements references LoginPage, by the way):

    public void Login()
    {
        LoginPage login = new LoginPage(_driver);
        login.sendUserName("example_user");
        login.sendPassword("example_password");
        login.submit();
    }

This returns Message: Initialization method Test_Dashboard_Elements.DashboardTests.Test_Setup threw exception. System.ArgumentNullException: System.ArgumentNullException: searchContext may not be null Parameter name: searchContext

I feel like this has to do with passing _driver twice, once through the instance inside the TestInitialize and again in the DashboardElements login method, but I have no idea how else to do this.
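
One likely culprit is visible in the snippet itself: dash is constructed before _driver is assigned, so DashboardElements captures a null driver, which matches the "searchContext may not be null" message. A sketch of the reordered setup:

[TestInitialize]
public void Test_Setup()
{
    _driver = new FirefoxDriver();                      // create the driver first
    _driver.Navigate().GoToUrl("exampleurl.com/login");
    dash = new DashboardElements(_driver);              // now the driver is non-null
    dash.Login();
}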

How do I know that the Flask application is ready to process the requests without actually polling it?

During a smoke test, I want to ensure that a Flask application correctly handles a few basic requests. This involves starting the Flask application asynchronously:

class TestSmoke(unittest.TestCase):
    @staticmethod
    def run_server():
        app.run(port=49201)

    @classmethod
    def setUpClass(cls):
        cls.flaskProcess = multiprocessing.Process(target=TestSmoke.run_server)
        cls.flaskProcess.start()

and then running the tests, which perform the requests with the requests library.

If the code is left as is, the tests are often run before the server is actually started, which results in ConnectionRefusedError. To prevent this from happening, I appended the following code to setUpClass:

while True:
    try:
        requests.get("http://localhost:49201/", timeout=0.5)
        return
    except requests.exceptions.ConnectionError:
        pass

While this works, it looks ugly. Given that the test case is in control of the Flask application, there should be a way for it to be notified once the application is ready to process the requests. Unfortunately, the closest thing I've found is got_first_request, which doesn't help (unless once again I'm polling the server).

How is it possible to determine that the Flask application started and is ready to process the requests when running it asynchronously?
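
One event-based sketch, assuming the Werkzeug development server that Flask uses: make_server binds the listening socket when it is constructed, so an Event set right afterwards tells the parent process that connections will be accepted, with no HTTP polling:

import multiprocessing
from werkzeug.serving import make_server

def run_server(ready):
    server = make_server("localhost", 49201, app)  # socket is bound and listening here
    ready.set()                                    # signal the parent: safe to connect
    server.serve_forever()

# in setUpClass:
ready = multiprocessing.Event()
cls.flaskProcess = multiprocessing.Process(target=run_server, args=(ready,))
cls.flaskProcess.start()
ready.wait(timeout=5)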

Rust: Setting environment variables

I am attempting to write tests for my Rust program. Normally these tests are run in parallel, but I want to run them sequentially. I looked around and found that I can set the environment variable RUST_TEST_TASKS=1, but I am not sure where to do that. Can someone please provide some insight into setting environment variables for Rust?
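
For what it's worth, a sketch of both sides of this: the test harness reads the variable when it starts, so it is normally set in the shell that launches the tests, while std::env covers reading and writing variables from Rust code itself:

// From the shell, for the test runner (the harness reads it at startup):
//
//     RUST_TEST_TASKS=1 cargo test
//
// From Rust code, environment variables go through std::env:
use std::env;

fn main() {
    env::set_var("RUST_TEST_TASKS", "1");              // affects this process + children
    println!("{}", env::var("RUST_TEST_TASKS").unwrap());
}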

How to use Node.js to compare text and image files for equality?

I'm writing a spec using Jasmine Node, and I'd like to compare files on disk with files in memory to check that they're the same. Most of the files are text, but one is an image file (PNG).

How can this be done? Do you know of a utility library that helps with this?
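
A minimal sketch without extra libraries: read the file into a Buffer and compare byte for byte, which treats text and PNG identically (the function and argument names are mine):

var fs = require('fs');

// Compare a file on disk against an in-memory Buffer, byte for byte.
function sameAsOnDisk(filePath, memoryBuffer) {
  var diskBuffer = fs.readFileSync(filePath);
  if (diskBuffer.length !== memoryBuffer.length) return false;
  for (var i = 0; i < diskBuffer.length; i++) {
    if (diskBuffer[i] !== memoryBuffer[i]) return false;
  }
  return true;
}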

MeteorDown: Wait time between thread start

meteorDown.run({
  concurrency: 10,
  url: "http://localhost:3000",
  key: 'YOUR_SUPER_SECRET_KEY',
  auth: {userIds: ['JydhwL4cCRWvt3TiY', 'bg9MZZwFSf8EsFJM4']}
})

Is it possible to invoke the connections with a delay?
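
To my knowledge meteor-down does not expose a per-connection delay option, so one hedged workaround is staggering several single-connection runs from plain JavaScript:

// Hypothetical workaround: start 10 runs of 1 connection each, 1s apart.
for (var i = 0; i < 10; i++) {
  setTimeout(function () {
    meteorDown.run({
      concurrency: 1,
      url: "http://localhost:3000",
      key: 'YOUR_SUPER_SECRET_KEY'
    });
  }, i * 1000);
}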

Friday, May 29, 2015

Google Wallet Api test credentials

Can the 'Pay with Google Wallet' functionality using the Wallet Objects API be accessed without creating and getting approval for a Google Merchant account? I am developing a jQuery widget for a US-based client. I need sandbox test credentials to integrate and test the Wallet API. Please suggest any link where I could get test credentials. The Wallet API link is not appearing in the Google APIs list.

I'm new to automation in .net with Coded UI

I was recently employed by a tech company. They didn't have a QA team; they hired a few testers with no coding knowledge and gave them extensive training. I have been learning about Coded UI on Pluralsight. Although the record-and-play option has its own disadvantages, I managed to create Page Object Model UIMap test cases for acceptance testing in Visual Studio 2013. I'm having issues with how to run the tests outside the application with batch files and create a report. Please help. Thanks.
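
A sketch of the command-line route, assuming Visual Studio 2013's test runners are on the PATH; a .bat file can wrap either line, and the .trx result file becomes the report:

rem vstest.console.exe ships with VS2012+; /Logger:trx writes a results file
vstest.console.exe MyCodedUITests.dll /Logger:trx

rem the older MSTest runner works too
mstest /testcontainer:MyCodedUITests.dll /resultsfile:results.trx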

How to display a short test report/counters in travis-ci?

I mean, it would be very useful if I could see how many tests passed/failed in just one line, without reading the build logs.

I use Karma as the test runner. It has a lot of reporters, but which one should I use?

(Example of what I mean: TeamCity's test counter.)
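
For a terse per-run summary, the built-in dots reporter is one option (sketch; progress is the other built-in):

// karma.conf.js — "dots" prints one character per test and ends with a
// one-line summary such as "Executed 42 of 42 SUCCESS".
module.exports = function (config) {
  config.set({
    reporters: ['dots']
  });
};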

Minimal set of test cases with modified condition/decision coverage

I have a question regarding modified condition/decision coverage (MC/DC) that I can't figure out.

Say I have the expression ((A || B) && C), and the task is to reach 100% MC/DC with a minimal number of test cases.

I break it down into two parts, with the minimal number of test cases for (A || B) and (X && C), where X stands for (A || B).

(A || B) : {F, F} = F, {F, T} = T, {T, -} = T
(X && C) : {F, -} = F, {T, F} = F, {T, T} = T

The '-' means that it doesn't matter which value it has, since short-circuit evaluation means it is never evaluated.

So when I combine these I get this as my minimal set of test cases:

((A || B) && C) : {{F, F}, -} = F, {{F, T}, F} = F, {{T, -}, T} = T

But when I google it, this is also in the set: {{F, T}, T} = T, which I do not agree with, because I tested the parts of this set separately in the other tests, didn't I?

So I seem to be missing what the fourth test case adds to the set, and it would be great if someone could explain why I must have it.
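
For reference, a sketch of the pairing argument under the standard (unique-cause) definition: MC/DC requires, for every condition, a pair of tests in the set where only that condition changes and the outcome of the whole decision changes. Reading the '-' don't-cares as free choices, the four tests pair up like this:

  • A: {{T, -}, T} = T vs {{F, F}, T} = F (only A flips, outcome flips)
  • B: {{F, T}, T} = T vs {{F, F}, T} = F (only B flips, outcome flips)
  • C: {{F, T}, T} = T vs {{F, T}, F} = F (only C flips, outcome flips)

Without {{F, T}, T}, neither B nor C has a partner: the only remaining passing test, {{T, -}, T}, differs from each failing test in more than one condition. Covering the subexpressions separately is not enough, because the independence pairs must exist within the combined test set; with n conditions you need at least n + 1 tests, hence four here.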

How to pass run time parameters accepted from user to a TestNG test case?

I have a Swing frame which accepts some input from the user, e.g. username and password.

I want to pass that input to a TestNG test case or testng.xml. I can't pass hardcoded parameters using @Parameters or @DataProvider.
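
One sketch: launch TestNG programmatically from the Swing code and inject the user's input as suite parameters, which @Parameters can then pick up (the test class name is a placeholder):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class Launcher {
    public static void runTests(String username, String password) {
        XmlSuite suite = new XmlSuite();
        Map<String, String> params = new HashMap<String, String>();
        params.put("username", username);   // values typed into the Swing form
        params.put("password", password);
        suite.setParameters(params);

        XmlTest test = new XmlTest(suite);
        test.setXmlClasses(Arrays.asList(new XmlClass("tests.LoginTest"))); // placeholder

        TestNG testng = new TestNG();
        testng.setXmlSuites(Arrays.asList(suite));
        testng.run();
    }
}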

Add test package to existing Android Studio project

I have an existing Android app and I'm trying to implement some JUnit tests. Normally (with Android Studio 1.2) the test package is created automatically. If I try to manually create folders to mimic the structure I've seen elsewhere, there is either no option to create a package under those folders, or I can't name it what I should be able to without it being placed under the existing package. Does anyone know how to properly add this, just as it would have been when auto-created?

Every article covering this assumes it's already there... like this one: http://ift.tt/1J9fbzb

(Screenshot: project tree with the test package missing.)

Jest, Jasmine mock window.require

I'm using React and writing some tests using Jest. Since this is an Electron app, in one of my React components I have:

var remote = window.require('remote');
var shell = remote.require('shell');

Because of the above, the tests fail with Object [object global] has no method 'require'. So I'm trying to find a way to mock window.require. I thought that something like the code below would mock it, but it does not. Any ideas?

var window = function () { return { require: function () { return {}; } }; };

module.exports = window;
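
The snippet above defines a local function named window rather than stubbing the property the component actually reads. A sketch that assigns window.require before the component is required (the component path is a placeholder):

// Stub the Electron-only entry point before loading the component under test.
window.require = function (name) {
  if (name === 'remote') {
    return { require: function () { return {}; } };  // remote.require('shell') -> stub
  }
  return {};
};

var MyComponent = require('../MyComponent');  // placeholder path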

How to test internal API failure

Does anyone know of a good framework, or technique, to test API failures generated by internal system failures, as opposed to external input errors?

I feel like this is a standard thing to do, and I've heard people talk about doing this in an automated way, but I just don't know where to start.

For some specifics, I'm working in Java Tomcat and Spring Boot applications, using external API tests.

Thank you

Flask server returns 404 when attempting unit tests

I'm trying to start doing unit testing on my Flask app before I add any more functionality, and I've been stuck at the starting line for longer than I'd like to admit. I'm using Flask-Testing (the latest master from their repo). I'm also using LiveServerTestCase because I'd like to use Selenium live testing. An example of the setup was provided on the Flask docs website, and it seemed simple enough.

I have tried so many code variations that have not worked but I'll post the most recent iteration.

Here is the error message that I keep getting:

(venv)username@username-VM64:~/git/approot$ python ./seleniumtests.py
 * Running on http://127.0.0.1:8943/ (Press CTRL+C to quit)
127.0.0.1 - - [29/May/2015 20:01:12] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [29/May/2015 20:01:13] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [29/May/2015 20:01:14] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [29/May/2015 20:01:15] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [29/May/2015 20:01:16] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [29/May/2015 20:01:17] "GET / HTTP/1.1" 404 -
E
======================================================================
ERROR: test_server_is_up_and_running (__main__.BaseTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./seleniumtests.py", line 23, in test_server_is_up_and_running
    response = urllib2.urlopen(self.get_server_url())
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: NOT FOUND

----------------------------------------------------------------------
Ran 1 test in 5.093s

FAILED (errors=1)

This is just the basic server test, and it appears that the app instance doesn't exist, even though it looks like it's running at the start.

Here is seleniumtests.py:

import urllib2, os, unittest
from app import models, db, app, views
from flask import Flask
from flask_testing import LiveServerTestCase

class BaseTestCase(LiveServerTestCase):

    def create_app(self):
        app = Flask(__name__)
        app.config.from_object('config.TestConfiguration')
        return app

    def test_server_is_up_and_running(self):
        response = urllib2.urlopen(self.get_server_url())
        self.assertEqual(response.code, 200)

    def setUp(self):
        db.create_all()

    def tearDown(self):
        db.session.remove()
        db.drop_all()



if __name__ == '__main__':
    unittest.main()

This is the config I'm using, stored in config.py:

class TestConfiguration(BaseConfiguration):
    TESTING = True
    DEBUG = True
    WTF_CSRF_ENABLED = False
    SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'
    LIVESERVER_PORT = 8943

This is what my folder structure looks like, if that helps.

.
├── app
│   ├── forms.py
│   ├── __init__.py
│   ├── models.py
│   ├── momentjs.py
│   ├── static
│   ├── templates
│   └── views.py
├── config.py
├── __init__.py
├── README.md
├── requirements
├── run.py
├── seleniumtests.py
├── tmp
│   ├── cover
│   └── coverage
├── unittests.py
└── venv

Any insight would be helpful. I was finally getting the hang of flask (and feeling good about it) until I hit automated testing.
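
One thing that stands out (hedged, but it matches the symptom): create_app builds a brand-new Flask(__name__) with no routes registered, so every request 404s. Returning the imported application instead would exercise the real views:

# sketch: serve the app the project actually defines, not an empty one
from app import app

class BaseTestCase(LiveServerTestCase):
    def create_app(self):
        app.config.from_object('config.TestConfiguration')
        return app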

What is single fault assumption theory in testing?

The single fault assumption is the assumption that failures are only rarely the result of the simultaneous occurrence of two (or more) faults.

Can anyone explain it with an example from real life, as well as through a program?

How to test a Django site's front-end JavaScript

I'm trying to test my Django-powered website's Backbone.js front end.

I found out that Karma and Jasmine are for front-end testing, but it seems like they are just for the "front end", meaning they cannot test interactions between the front-end Backbone models and my Django REST API server.

What I'm looking for is a testing framework that can test front-end JavaScript in a BDD style (including interactions between the front end and the back end).

Are there any sharp tools for that?

Spring controller tests with mocks

So I'm having some issues coming up with a solution for a test; here's what I have so far.

This is the method I want to test (I'm new to this); it clears all fields on a web page each time it's loaded:

@RequestMapping("/addaddressform")
public String addAddressForm(HttpSession session)
{
    session.removeAttribute("firstname");
    session.removeAttribute("surname");
    session.removeAttribute("phone");
    session.removeAttribute("workno");
    session.removeAttribute("homeno");
    session.removeAttribute("address");
    session.removeAttribute("email");

    return "simpleforms/addContact";
}

And here's what I have so far for the test:

package ControllerTests;

import java.text.AttributedCharacterIterator.Attribute;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mock.web.MockHttpSession;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.MvcResult;
import org.springframework.test.web.servlet.request.MockHttpServletRequestBuilder;
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders;
import org.springframework.test.web.servlet.result.MockMvcResultHandlers;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;


@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = {SimpleFormsControllerTest.class})

public class SimpleFormsControllerTest {
  @Autowired
  private WebApplicationContext wac;
  private MockMvc mockMvc;

  @Before
  public void setup() {
    this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
  }

  @Test
  public void addAddressForm_ExistingContactDetailsInForm_RemovalFromSession() throws Exception {

    MockHttpSession mockSession = new MockHttpSession();

    mockSession.putValue("firstname", "test");
    mockSession.putValue("surname", "test");
    mockSession.putValue("phone", "test");
    mockSession.putValue("workno", "test");
    mockSession.putValue("homeno", "test");
    mockSession.putValue("address", "test");
    mockSession.putValue("email", "test");

    mockMvc.perform(get("simpleForms/addaddressform").session(mockSession));

  }
}

As this is the first time I've ever had to do this kind of thing, I really don't have much clue where to go with it.
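
A sketch of the missing verification step, reusing the imports already present: perform the request, check the status, then assert the session attributes are gone (the exact URL must match your servlet mapping, which is an assumption here):

mockMvc.perform(get("/addaddressform").session(mockSession))
       .andExpect(status().isOk());

// after the handler ran, the attributes should have been removed
org.junit.Assert.assertNull(mockSession.getAttribute("firstname"));
org.junit.Assert.assertNull(mockSession.getAttribute("email"));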

Wait for user action

I am looking for a solution, if it is possible, to wait for data entered by the user in Protractor.

I mean that the test stops for a while so I can enter some values, and then this data is used in further tests.

I tried to use a JavaScript prompt, but I did not get far; maybe it is possible to enter the data in the OS terminal?

Please give me an example if it is possible.

Force test on extension of abstract class

I have an abstract class that has good test coverage. I want to make sure that any extensions of that class also pass the tests of the abstract class. Is there any way to ensure this with code using JUnit?
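
A common JUnit pattern for this (sketch; names are placeholders): keep the contract tests in an abstract test class with an abstract factory method, and give every concrete subclass of the production class a test class that extends it, inheriting all the tests:

import org.junit.Test;

public abstract class AbstractFooTest {

    // each extension's test class says how to build its instance
    protected abstract Foo createInstance();

    @Test
    public void contractHoldsForEveryExtension() {
        Foo foo = createInstance();
        // ...the assertions that currently cover the abstract class
    }
}

public class ConcreteFooTest extends AbstractFooTest {
    @Override
    protected Foo createInstance() {
        return new ConcreteFoo();
    }
}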

I can't rename my transaction on HP Virtual User Generator Script

I can't rename my transaction, which is in this function:

lr_start_transaction((char*)MDM_GET_ASSOCIATIONS);

and this is the script where it is used:

//########## start the test scenario ############
web_set_max_html_param_len("8000");
web_set_sockets_option("SSL_VERSION", "TLS");
web_add_auto_header("Content-Type","application/xml");
web_add_auto_header("Accept","application/json");
web_add_auto_header("Authorization",lr_eval_string("{AUTHORIZATION}"));

//GetAssociations, NOTE: our dummy customers have often NO associations!
web_reg_save_param("RESPONSE", "LB=", "RB=", "Search=Body", LAST);
lr_start_transaction((char*)MDM_GENERIC_TRANSACTION);
lr_start_transaction((char*)MDM_GET_ASSOCIATIONS);
web_custom_request(MDM_GET_ASSOCIATIONS, 
    "URL={TEST_ENV_HOSTNAME}/api/v3/clients/{BUSINESS_CONTEXT}/customers/{GCID}/associations",
    "Method=GET", 
    "Resource=1",   // => We are retrieving a ressource, 
                    // which implies that it is not critical for the success of the script. 
                    // Any failures (HTTP 404 - Not found etc.) in downloading the resource 
                    // will be considered as warnings rather than errors.
    "EncType=application/xml", 
    "Referer=Loadrunner",
    LAST);
lr_end_transaction((char*)MDM_GET_ASSOCIATIONS, LR_AUTO);
lr_end_transaction((char*)MDM_GENERIC_TRANSACTION, LR_AUTO);

return 0;

}

PhpUnit: Run all tests which are NOT in a testsuite?

Is it possible to filter for all tests which are not part of the defined test suites?

We have already defined some test suites, but it may happen that tests get written which are not covered by any test suite...

How to test your JSF application for CSRF attacks

I have a JSF (ADF) based application. How can I test the application for CSRF and XSS?

Is there any tool I can use?

Visual studio 2012 Ordered test, how to control test execution

I have an ordered test with some tests that run properly. My problem is that I can't tell, during test execution, which test is currently running! How can I control the running execution of ordered tests at run time?

How does Ruby work with Test Automation Page Object Pattern

The key to the page object pattern is that a certain page can do certain things and other pages can do other things. This works well with Java's encapsulation.

However, with Ruby's flexibility I don't see how this can work:

  • There's no static type checking.
  • Variable names are just labels; they don't have a type associated with them.
  • There are no type declarations. You just assign to new variable names as needed and they just "spring up" (i.e. a = [1,2,3] rather than int[] a = {1,2,3};).
  • There's no casting. Just call the methods. Your unit tests should tell you, before you even run the code, if you're going to see an exception.

Can someone explain?
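
A minimal Ruby sketch of the pattern (selenium-webdriver style; the element ids are placeholders): the "this page can do these things" contract comes from each class's method list rather than from static types, and navigation methods return the next page object:

class LoginPage
  def initialize(driver)
    @driver = driver
  end

  def login(user, password)
    @driver.find_element(id: 'user').send_keys(user)
    @driver.find_element(id: 'pass').send_keys(password)
    @driver.find_element(id: 'submit').click
    DashboardPage.new(@driver)  # only the *next* page's methods are now available
  end
end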

How to test this function with jasmine-jquery?

This is the function I want to test, but I don't know how to do it.

conflictModalCancel: function () {
    this.$(":disabled").prop("disabled", false);
},

What I have done so far:

describe("function conflictModalCancel", function(){

    beforeAll(function(){
        loadFixtures('addvehicle.html');
    });

    it ("should set property disabled", function(){
        view.conflictModalCancel();
        expect($("selectTypeWrapper")).toHaveProp("disabled");
        expect($("selectTypeWrapper").prop("disabled")).toBe(false);
    });
});

And the fixture file I am loading looks like this:

<div id="selectTypeWrapper" type="disabled" disabled="true">
    <div id=".panel">
        <div id=".panel-collapse"></div>
    </div>
</div>
<div id=":disabled"></div>
<select id="selectLanguage"></select>

I am new to all this stuff and I probably misunderstood something. Can you please help me write this test, so that I understand how to handle such problems?
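
A couple of things stand out (hedged sketch): id selectors need a leading '#', the :disabled pseudo-class only matches form controls (so the <select>, not a <div>), and toHaveProp takes the expected value as a second argument. Something along these lines, assuming the view can see the fixture:

describe("function conflictModalCancel", function () {

    beforeEach(function () {
        loadFixtures('addvehicle.html');
        $("#selectLanguage").prop("disabled", true);   // give the function something to re-enable
    });

    it("re-enables disabled form controls", function () {
        view.conflictModalCancel();
        expect($("#selectLanguage").prop("disabled")).toBe(false);
    });
});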

Thursday, May 28, 2015

Android Exerciser Monkey won't launch

I am trying to test my Android app using the Exerciser Monkey feature. Following the docs, I have to navigate in a terminal window to the platform-tools folder and issue the command, but I receive the following error:

Vargas-iMac:platform-tools vedtam$ $ adb shell monkey -p foto.studio -v 500
-bash: $: command not found

I would appreciate any advice; I could not find any info about this issue around here. Thanks.
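
For what it's worth, the `-bash: $: command not found` output suggests the leading `$` (the docs' prompt character) was pasted along with the command; the command itself is just:

adb shell monkey -p foto.studio -v 500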

Call method on variable defined in :let clause in Rspec before all test cases?

I'm working on a chess program and trying to write tests for the Board class. The top of the spec file contained the following code:

describe Board do
    let(:board)       { Board.new }
    let(:empty_board) { Board.new(empty=true) }
    ...
end

However, I read that having boolean flags on methods is a code smell, because it signifies that the method is responsible for more than one thing. So I refactored the logic in the initialize method out into two methods in the Board class: create_default_board, which initializes the contents of the board to the default configuration, and create_empty_board.

In the spec file, however, I can't figure out how to call these methods on board and empty_board, respectively, before the individual tests are run, without having to do so within each describe block. Is there a way around this?
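
One sketch that keeps the setup inside the let blocks is Object#tap, which calls the method on the new instance and still returns the instance:

describe Board do
  let(:board)       { Board.new.tap(&:create_default_board) }
  let(:empty_board) { Board.new.tap(&:create_empty_board) }
end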

Overriding TestNG Summary

I do not want the default TestNG summary (shown below) to be displayed:

===============================================
Simple Reporter Suite
Total tests run: 3, Failures: 1, Skips: 1
===============================================

Is there any way I can override that?

(I know that I can implement my own report by implementing the IReport interface)

Testing template based class with const template parameter which has to be varied

I am developing a template-based library to support fixed-point integers, and I came up with the class below. Now I have to test it for various values of INT_BITS and FRAC_BITS, but since they are compile-time constants (and have to be, for a reason), I am unable to initialize objects with a variable INT_BITS in a loop, which is making this library very difficult to test.

template<int INT_BITS, int FRAC_BITS>
struct fp_int
{
     public:
            static const int BIT_LENGTH = INT_BITS + FRAC_BITS;
            static const int FRAC_BITS_LENGTH = FRAC_BITS;
     private:
            // Value of the Fixed Point Integer 
            ValueType stored_val;
};

I tried a lot of the tricks mentioned here, here and here. I tried using a std::vector of const int and const_cast, but nothing seems to work.

I was wondering: how do you test such libraries, where the template parameter is a compile-time constant, for a large set of test values?
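
Since the parameters must be compile-time constants, the usual workaround is to iterate at compile time as well, e.g. with a recursive template that instantiates the test for every value in a range (sketch; the fixed total width of 32 is an arbitrary assumption):

template <int INT_BITS, int FRAC_BITS>
void test_one() {
    static_assert(fp_int<INT_BITS, FRAC_BITS>::BIT_LENGTH == INT_BITS + FRAC_BITS,
                  "bit length mismatch");
    fp_int<INT_BITS, FRAC_BITS> x;
    // ...runtime checks on x go here
}

template <int I>
struct ForEachIntBits {
    static void run() {
        test_one<I, 32 - I>();        // e.g. keep the total width fixed at 32
        ForEachIntBits<I - 1>::run(); // recurse down to the base case
    }
};

template <>
struct ForEachIntBits<1> {
    static void run() { test_one<1, 31>(); }
};

// usage: ForEachIntBits<31>::run();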

How to test Jquery Mobile web App on an iPhone

I have an iPhone and a local mobile web app created on my desktop. I can't figure out how to test this web app on my iPhone. What do I need to do to run and test the web app on the iPhone?
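
One hedged possibility: serve the folder over the local network and open it from mobile Safari, e.g. with Python 2's built-in server, assuming the iPhone and the desktop are on the same Wi-Fi:

# run from the web app's folder on the desktop
python -m SimpleHTTPServer 8000
# then browse to http://<desktop-LAN-IP>:8000 on the iPhone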

Accurately delaying time between KIF waitForViewWithAccessibilityLabel and tapViewWithAccessibilityLabel

Is there a way to accurately (to thousandths of a second) delay the time between waiting for a view to appear (waitForViewWithAccessibilityLabel) and then tapping another view (tapViewWithAccessibilityLabel) in KIF? In my app code I set two DateTime objects, on view appear and on tap, but their time difference does not match the delay I put in KIF.

I have tried waitForTimeInterval and also

 while(true) {
    NSTimeInterval time = [[NSDate date] timeIntervalSinceDate:dateStart];
    if(time > 1.678)
        break;
 }

but both of those have errors of about 0.15 seconds. I assume this error comes from waitForViewWithAccessibilityLabel looking for the view to appear. Is there any way to start the timer at the time the view is actually found? Or any other suggestions for solving this problem?

Cheers, Mo

Clicking through carousel image in protractor

I have a carousel image which varies depending on the customer GUID I am viewing. So far I have made this work, but when I put it in a for loop, it is not working.

Here is my code:

var date = element(by.css('i.icon.left-arrow'));
browser.wait(EC.elementToBeClickable(date), 30000, "Date Range is still not clickable");
date.click(); // This works but this will go back only once.

I have this for loop to identify all the elements and click through the images. Is this the correct way of identifying them in Angular? Please advise.

var backArrow = element.all(by.css('i.icon.left-arrow'));
for (var i=0;i<backArrow.length;i++) {
    backArrow.click();
}

Here is the element. It depends on the customer I am viewing; the number of images can vary anywhere between 1 and 50:

<i class="icon left-arrow"></i>
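
One thing to note (hedged): element.all(...) returns an ElementArrayFinder, not a plain array, so backArrow.length is undefined and the loop body never runs. A sketch using the resolved count:

var backArrows = element.all(by.css('i.icon.left-arrow'));

backArrows.count().then(function (n) {
  for (var i = 0; i < n; i++) {
    backArrows.get(i).click();
  }
});
// or, more idiomatically:
// backArrows.each(function (arrow) { arrow.click(); });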

Protractor tests are passing regardless if isDisplayed is true

I have some E2E tests that are currently passing. I thought this was working as expected; however, I use browser.sleep() after inputting data into fields, so I can modify some fields and see whether the test will fail when it reaches the expected results.

Our test spec fills out a form, and upon saving the form we navigate back home to check that the name of the form is displayed in a grid (if the save is successful).

The expected code in our spec is as follows:

expect(element.all(mainPO.getScheduled()).isDisplayed());

Main PageObject:

this.currentScheduledCampaign = by.linkText(scheduledData.scheduledEntity.name);

this.getCurrentScheduledCampaign = function() {
     return this.currentScheduledCampaign;
}; 

scheduledEntity:

this.scheduledEntity = {
    name: 'Protractor Test' + ' ' + uuid.v4()
};

Why would .isDisplayed() not report the Protractor test as a fail, even if I remove, say, the "Protractor Test" part of the name during a browser.sleep() BEFORE saving the campaign?

I've tried running with console.log() on both getCurrentScheduledCampaign and scheduledData.scheduledEntity.name, and they seem to return the proper expected name, being "Protractor Test [UUID]" and { using: 'link text', value: 'Protractor Test [UUID]' }.
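
One hedged observation: the expect() call never states a matcher, so Jasmine asserts nothing and the spec can only pass. A sketch of a version that can actually fail (assuming the locator getter from the page object):

expect(element(mainPO.getCurrentScheduledCampaign()).isDisplayed()).toBe(true);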

ActionView::Template::Error: undefined method `[]' for nil:NilClass

I get the following error when I run my tests:

 6) Error:
StaticPagesControllerTest#test_should_get_help:
ActionView::Template::Error: undefined method `[]' for nil:NilClass
 (in/home/bastien/rails/myapp/app/assets/stylesheets/application.css)
 app/views/layouts/application.html.erb:5:in
`_app_views_layouts_application_html_erb__3455207115462205235_70087307924540'
test/controllers/static_pages_controller_test.rb:11:in `block in 
<class:StaticPagesControllerTest>'

I am not sure I understand how/why this error is triggered.

The file application.css is fully commented out

The test code is very straightforward

test "should get about" do
get :about
assert_response :success
assert_select "title", "About | Myapp"
end

Same for the controller...

def about
end

... the view (about.html.erb)...

<% provide(:title, "About") %>
<h1>About</h1>

and the route

  get 'about' => 'static_pages#about'

I suspect that the issue may come from somewhere else in the application, but I have no idea where. Could anyone give me a hint to help debug, or a methodology to follow?

Why do we need software testing? [on hold]

I have no experience with software testing, but intuitively I cannot figure out the point of software testing and why we need to do it. In my opinion, a piece of software should first have its functional specifications (the needs of the client); then the software developers write code to fulfil the specifications. For example, the specifications might say that "when the client types his name in the input box, all the names of his family members appear on the screen".

So during development, the developer can naturally write the right code to implement the function listed above.

The developers do the same thing for all the functions detailed in the software specifications (developers write code to implement the functions and make sure it works by debugging it).

So, seen this way, I cannot find any reason for the testing procedure. Can anyone tell me if my reasoning is wrong and explain the necessity of testing with real examples?

Thanks a lot!

Testing framework for Javascript (no web)

I have a desktop application whose core is in C++, with modular business logic (a kind of "plug-in" system) written in JavaScript. There are no web applications, sites, forms, HTML, etc.; JavaScript is being used as a "usual" desktop programming language.

I'm looking for a framework for unit testing. But when I googled, I found only frameworks targeted at web applications, which (usually) use browsers to perform the tests.

Of course, tests can be written just as JavaScript functions and I can run them somehow manually. But I wonder, is there an existing framework for this?

Testing FormDataParam with an xls file

Hello, I have this method that I need to test:

@POST
@Path("/loadlocaleListfromxls")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response addCountryFromXSL(@FormDataParam("file") InputStream inFile) {
    try {
        CountryListTranslator.addCountries(inFile);
    } catch (Exception e) {
        e.printStackTrace();
        return Response.status(500).build();
    }
    return Response.ok().build();
}

And here is my test:

String location = System.getProperty("/src/integration/resources/file.xml");
InputStream inputStream = new FileInputStream(location);

RestAssured.given().multiPart("file", inputStream).when().post("www.postfile/post")
    .then().assertThat().statusCode(200);

The issue is that it gets the exception java.lang.reflect.InvocationTargetException in Invoker.java. Any idea?

Thanks for your help

Why Django's assertLogs fails when it shouldn't?

So, here's a little test case to use in Django 1.8:

import logging
from django.test import TestCase

class LoggingTest(TestCase):
    def test_logging(self):
        with self.assertLogs(logging.getLogger('irk'), logging.INFO):
            logging.getLogger('irk').warning('test')

Now, what can go wrong with it? I want to check whether something is logged, and the only thing that I do is log something. The 'irk' logger is set to level 20 (INFO), but I still get:

======================================================================
FAIL: test_logging (core.tests.JsonDecoratorTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/vagrant/irka2/irk/core/tests.py", line 279, in test_logging
    logging.getLogger('irk').warning('test')
AssertionError: no logs of level INFO or higher triggered on irk

Any ideas what's wrong with this test case? Changing getLogger('irk') to getLogger() doesn't make any difference.

I'm using Django 1.8 and Python 3.4.

What is the best framework for unit testing a web application?

What is the best framework to automate unit testing of a web application?

Is there another framework better than FuncUnit?

Or is there the possibility of testing with jQuery?

Mocha Test Fails with AssertionError

In JUnit (Java), the result of a unit test is either a success, a failure or an error.

When I try to run a test with Mocha, I either get a success or an AssertionError.

Is it normal to get an AssertionError for failing tests? (Shouldn't it just be called a failure and not an error?)

AssertionError: -1 == 2 + expected - actual

What about testing asynchronous code? When my tests fail I get an uncaught error. Is that normal?

Like this:

Uncaught Error: expected 200 to equal 201

MockMvc works with standaloneSetup but not with webAppContextSetup

I'm trying to test a Spring REST controller using MockMvc. There are two approaches to creating a MockMvc instance:

@WebAppConfiguration
@ContextConfiguration(classes = {WebConfig.class})
public class ControllerWebMvcTest extends AbstractTestNGSpringContextTests {
    @Autowired
    private WebApplicationContext webAppContext;

    private MyRestController controller;

    @BeforeMethod
    public void setUp() {
        controller = new MyRestController();
        initMocks(this);
        // first approach:
        mockMvc = MockMvcBuilders.standaloneSetup(controller).build();
        // second approach:
        mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
        Assert.notNull(mockMvc, "mockMvc is null");
    }
}

When I use the standaloneSetup approach, the tests work fine. But testing an exception handler class annotated with @ControllerAdvice, which handles exceptions for the controller, requires webAppContextSetup. When I start my tests with the second approach, I receive an exception:

java.lang.IllegalArgumentException: json can not be null or empty

Programatically change NuGet options

How can we programmatically change the NuGet Visual Studio option, using C#? The one that appears at Tools -> Options -> NuGet Package Manager -> General -> Automatically check for missing packages during build in Visual Studio.

var dte = EnvDTE.DTE;
var properties = dte.Properties["NuGetPackageManager", "General"];

The code above throws the following exception:

System.Runtime.InteropServices.COMException occurred
  HResult=-2147352565
  Message=Invalid index. (Exception from HRESULT: 0x8002000B (DISP_E_BADINDEX))
  Source=""
  ErrorCode=-2147352565

Some of my automated tests need the NuGet package restore option switched off, and I am trying to automate this rather than switch it off for all tests.

Test Eclipse JDT refactoring

What is the best way to unit test a JDT Eclipse plugin which performs LTK refactorings? Do any helper classes exist for this purpose?

My plugin contains a class which extends org.eclipse.ltk.core.refactoring.Refactoring and implements the methods checkInitialConditions(...), checkFinalConditions(...) and createChange(...). Moreover, I have classes implementing RefactoringContribution and RefactoringDescriptor.

However, I don't know where my tests should hook in. How can I start the refactoring from the code?
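
A sketch of one entry point, assuming the standard LTK API and a PDE/JUnit plug-in test (so a workspace exists): PerformRefactoringOperation checks all conditions and performs the change in one go, after which you can assert on the workspace contents:

MyRefactoring refactoring = new MyRefactoring(/* inputs */);   // placeholder class

PerformRefactoringOperation op = new PerformRefactoringOperation(
        refactoring, CheckConditionsOperation.ALL_CONDITIONS);
ResourcesPlugin.getWorkspace().run(op, new NullProgressMonitor());

RefactoringStatus status = op.getConditionStatus();
assertFalse(status.hasFatalError());
// then assert on the files the refactoring should have changed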

Executing 200 test cases manually in a day

Can anyone let me know how a tester can test 200 test cases in a day? It is actually an interview question. How can one do it? Please let me know. Thanks.

Is there a way of saving and restoring the cache in Chrome or other web browsers?

A typical way of handling a problem with a website is "it's not working, try a refresh to update the cache". That's great, but then when you do work to make sure that the cache is invalidated correctly, it's difficult to check, because you've already updated the cache.

Is there a way of saving the cache in a browser, e.g. Chrome and restoring it to that state at a later date? That would allow you to restore your browser to the initial conditions before the refresh and check it all works smoothly.

How can I fix my failing test?

I inherited a Django project and I'm running its tests on 2 Ubuntu machines. On one machine it works, but on the other machine a test fails, and it looks like it has to do with Unicode and the Swedish (SE) locale, where the word "Förberedande" is not encoded the same in the assertion:

#711      test_personal_menu.test_courses_menu ... > /usr/lib/python2.7/unittest/case.py(412)fail()
-> raise self.failureException(msg)
(Pdb) u
> /usr/lib/python2.7/unittest/case.py(726)assertSequenceEqual()
-> self.fail(msg)
(Pdb) u
> /usr/lib/python2.7/unittest/case.py(744)assertListEqual()
-> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
(Pdb) list1
[(u'F\xf6rberedande kurs i matematik (SF1624)', u'http://ift.tt/1GFBieT')]
(Pdb) list2
[(u'F\u0e23\u0e16rberedande kurs i matematik (SF1624)', u'http://ift.tt/1GFBieT')]
(Pdb) 

Could you please tell me what I can do about it?

VHDL testbench: unknown syntax error

I am trying to write a testbench, but Vivado tells me that I have a syntax error on a specific line. I am not able to see what I have done wrong. Can anyone help?

Here is my testbench code:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.Numeric_Std.all;

entity mmu_tb is
end mmu_tb;

architecture test of mmu_tb is

  component mmu
    port (
      virt : in std_logic_vector(15 downto 0);
      phys : out std_logic_vector(15 downto 0);
      clock   : in  std_logic;
      we      : in  std_logic;
      datain  : in  std_logic_vector(7 downto 0)
    );
  end component;

  signal virt    std_logic_vector(15 downto 0);
  signal phys    std_logic_vector(15 downto 0);
  signal clock   std_logic;
  signal we      std_logic;
  signal datain  std_logic_vector(7 downto 0);

  constant clock_period: time := 10 ns;
  signal stop_the_clock: boolean;

begin

  mmu : mmu port map ( virt   => virt,
                     phys   => phys,
                     clock  => clock,
                     we     => we,
                     datain => datain);

 stimulus : process
     begin
     -- whatever
     end process;

     clocking: process
       begin
         while not stop_the_clock loop
           clock <= '1', '0' after clock_period / 2;
           wait for clock_period ;
         end loop;
         wait;
       end process;


end test;

And here is the error I get:

[HDL 9-806] Syntax error near "std_logic_vector". ["C:/ram/ram/http://ift.tt/1G25TV5":20]

Thank you for your time.
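
One thing that stands out in the listing, and that matches a syntax error "near std_logic_vector": the signal declarations are missing the colon between the name and the type. They would need to read:

  signal virt    : std_logic_vector(15 downto 0);
  signal phys    : std_logic_vector(15 downto 0);
  signal clock   : std_logic;
  signal we      : std_logic;
  signal datain  : std_logic_vector(7 downto 0);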

What is wrong with my test?

I'm running through a test suite and it seems that I'm getting an error related to my locale when testing whether "Förberedande" is the same:

#711      test_personal_menu.test_courses_menu ... > /usr/lib/python2.7/unittest/case.py(412)fail()
-> raise self.failureException(msg)
(Pdb) u
> /usr/lib/python2.7/unittest/case.py(726)assertSequenceEqual()
-> self.fail(msg)
(Pdb) u
> /usr/lib/python2.7/unittest/case.py(744)assertListEqual()
-> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
(Pdb) list1
[(u'F\xf6rberedande kurs i matematik (SF1624)', u'http://ift.tt/1GFBieT')]
(Pdb) list2
[(u'F\u0e23\u0e16rberedande kurs i matematik (SF1624)', u'http://ift.tt/1GFBieT')]
(Pdb) 

It seems to have something to do with how Python handles Unicode. Can you tell me what I can do about it? The problem is local to one of my Ubuntu dev machines; on the other machine the test works, and I can't tell what differs. I opened the language settings dialog on Ubuntu and tried to change it so that the Swedish language was prioritized, but that didn't help.

Wednesday, May 27, 2015

Best UI tests that run without opening a browser, for Java

I have a general question about testing. I want to create a test project for UI web testing (automated browser testing). The requirements are:

1- The test project should not open any browser.

2- The test project should test 3 browsers (Chrome, Firefox, Safari).

3- The test project must be in the Java language.

I have tried Selenium; unfortunately it does not work without opening a browser. I tried headless Selenium; it also does not work well.

Which development environment is good for these requirements?
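
One hedged pointer: Selenium's HtmlUnitDriver runs entirely in-process with no browser window, though it only emulates browsers rather than driving real Chrome/Firefox/Safari, so requirement 2 is met only approximately:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class HeadlessSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new HtmlUnitDriver(true);  // true = enable JavaScript
        driver.get("http://example.com");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}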

return of getText() cannot be compared to a string

I am using the following code to compare 2 strings in one of my Protractor/Jasmine test cases:

emailnotsentmessage.getText().then(function(text) {
          expect(text).toBe('has not received notification about recent changes to the meeting.');
        });

where emailnotsentmessage contains the following text:

[ 'has not received notification about recent changes to the meeting.' ]

For some reason, the string comparison fails. Those two strings contain absolutely the same content; I checked it several times. Am I missing something here? emailnotsentmessage is the content of a <span>.

Error trace:

1) Get to the existing meeting by navigating to the edit meeting page should display the same value which was entered du
ring create meeting when go into edit meeting
  Message:
    Expected [ 'has not received notification about recent changes to the meeting.' ] to equal 'has not received notific
ation about recent changes to the meeting.'.
  Stack:
    Error: Failed expectation
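
Note the brackets in the error trace: the resolved value is an array of strings (this happens when the finder matches several elements, or element.all was used), so the comparison is array-vs-string. A sketch assuming that is the cause:

emailnotsentmessage.getText().then(function (texts) {
  // texts resolves to an array here; compare the first entry
  expect(texts[0]).toBe('has not received notification about recent changes to the meeting.');
});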

POSTMAN jetpacks TESTING within for loop

I'm hitting an API with a GET request, checking by the globals.id variable whether an item has been deleted. I have the test inside a for loop, and when I run it, it returns 0/0 tests passed. All of my console logs within the for loop work, and the objects contain values matching what I have as well. Does anyone know how to do this?

var data = JSON.parse(responseBody);


for (var i = 0; i < data.length; i++){
  tests["id has been deleted"] = data[i].id !== globals.id;
  if(data[i].id !== globalID){
    tests["id has been deleted"] = data[i].id !== globals.id;
    return true;
  }
}
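
One hedged reading: globalID on the if line is never defined (the other lines use globals.id), so the script throws and Postman reports 0/0; the early return also stops the loop after the first item. A sketch that checks every item and sets the test once:

var data = JSON.parse(responseBody);

var stillPresent = data.some(function (item) {
  return item.id === globals.id;   // true if the deleted id is still in the list
});

tests["id has been deleted"] = !stillPresent;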

Android Robotium Sort Tester

I'm trying to write some code in Java Robotium which will test and check that my sort button is working correctly.

I do this by activating the sort button, then iterating through the sorted array to check the current value against the next value. The problem is that I'm comparing strings such as "Tenancy 8" and "Tenancy 12", where it then tells me that "Tenancy 8" > "Tenancy 12".

I know this is happening because the comparison is lexicographic (based on ASCII), but I was wondering how I could get around this issue. My code is as follows:

solo.clickLongOnView(tenancy_sort);

for (int i = 0; i < 9; i++)
{
    TextView tenancy_current = (TextView)
            solo.getView(R.id.device_selector_row_tenancy_text, i);
    TextView tenancy_next = (TextView)
            solo.getView(R.id.device_selector_row_tenancy_text, i + 1);

    String tenancy_current_text = tenancy_current.getText().toString();
    String tenancy_next_text = tenancy_next.getText().toString();

    int result = tenancy_current_text.compareTo(tenancy_next_text);
    assertTrue(result <= 0);
}

The way I sort the array in the actual application is by using a custom sort class which contains an algorithm for natural sorting. Thanks in advance for any help, Will.
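
One sketch for the test side, assuming labels of the form "<name> <number>": split off the trailing number and compare it numerically, mirroring the app's natural sort (the helper name is mine):

// Compare "Tenancy 8" < "Tenancy 12" by prefix first, then numeric suffix.
static int naturalCompare(String a, String b) {
    String[] pa = a.split(" (?=\\d+$)");   // split before a trailing number
    String[] pb = b.split(" (?=\\d+$)");
    int byPrefix = pa[0].compareTo(pb[0]);
    if (byPrefix != 0 || pa.length < 2 || pb.length < 2) {
        return byPrefix != 0 ? byPrefix : a.compareTo(b);
    }
    return Integer.compare(Integer.parseInt(pa[1]), Integer.parseInt(pb[1]));
}

// in the loop: assertTrue(naturalCompare(tenancy_current_text, tenancy_next_text) <= 0);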

QA - Cannot input into fields with type number using Selenium 2.9 and FF 38.0.1

I know that several past answers on this subject recommend using legacy versions; however, I am currently in a situation where that solution is infeasible. Another person suggested disabling 'dom.forms.number'. I'm looking for an answer to either how one can disable 'dom.forms.number' permanently, or how I can input numbers without reverting to past versions.

Thanks!

How to get code coverage information using Node, Mocha

I've recently started getting into unit testing for my Node projects with the help of Mocha. Things are going great so far and I've found that my code has improved significantly now that I'm thinking about all the angles to cover in my tests.

Now, I'd like to share my experience with the rest of my team and get them going with their own tests. Part of the information I'd like to share is how much of my code is actually covered.

Below is a sample of my application structure which I've separated into different components, or modules. In order to promote reuse I'm trying to keep all dependencies to a minimum and isolated to the component folder. This includes keeping tests isolated as well instead of the default test/ folder in the project root.

| app/
| - component/
| -- index.js
| -- test/
| ---- index.js

Currently my package.json looks like this. I'm toying around with Istanbul, but I'm in no way tied to it. I have also tried using Blanket with similar levels of success.

{
  "scripts": {
    "test": "clear && mocha app/ app/**/test/*.js",
    "test-cov": "clear && istanbul cover npm test"
  }
}

If I run my test-cov command as it is, I get the following error from Istanbul (which is not helpful):

No coverage information was collected, exit without writing coverage information

So my question would be this: Given my current application structure and environment, how can I correctly report on my code coverage using Istanbul (or another tool)?


TL;DR

How can I report on my code coverage using Node, Mocha, and my current application structure?
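
One likely culprit, for what it's worth: istanbul cover expects a JavaScript entry point, and the mocha binary forks a child process that escapes instrumentation. A sketch of the usual workaround, pointing Istanbul at the non-forking _mocha wrapper (paths assumed from the structure above):

{
  "scripts": {
    "test": "mocha app/ app/**/test/*.js",
    "test-cov": "istanbul cover _mocha -- app/ app/**/test/*.js"
  }
}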

How to compare test results of a Gradle build

I'm currently using build-artifact comparison tools/plugins available for Gradle, like compare-gradle-builds (http://ift.tt/1J5XAXo in Gradle) or the Artifact Diff Plugin (in Jenkins), or utility programs like pkgdiff (I would say this is one of the easiest ways to see high-level and detailed low-level differences when comparing artifacts, with nice HTML report output).

This helps in showing what's happened / changed since last build or in two given builds.

Similarly, I'm trying to find whether there are any tools/utilities/plugins available (in Gradle, Jenkins, or another open-source tool) which I can use to compare the test results (unit or non-unit, e.g. integration) of two given builds, i.e. something that shows exactly what got added/removed/changed/failed/turned successful, the time taken to run the tests, and configuration changes in the tests or the test code, with a side-by-side comparison if possible.

I know SonarQube provides info about this to some extent (since previous analysis/in last X no. of days periods etc) but I'm wondering if there are specific tools/plugins/utilities available just for doing comparison on test results.

NOTE:
What if I'm running:

  1. Tests in IE (Internet Explorer) in one matrix configuration slave/job, and tests in FF (Firefox) in another.
  2. Multiple jobs (for running a suite of tests). Is there any way to aggregate the results and compare?

Thanks.

What is the best way to test applications with video / audio streaming?

The app (an iPad app) allows multi-conferencing of up to 6 people. We use a 3rd party which lets us send 1 stream up, which it then replicates to the 5 other people in the meeting. We then receive the 5 streams from those other people.

The app allows you to do "other things" whilst vid conferencing - let's say it has a text box you can type into. We're finding that some issues arise whilst doing "other things" on the iPad.

At the moment we're just manually testing, which involves 6 people and requires a lot of overhead to set up. If we're to have a chance at fixing these issues we need to be able to test with just 1 person: change a line of code, test it, repeat until fixed. There will likely be 100s of tests.

I have only thought of elaborate ways to allow testing, such as having 5 iPads on a shelf (or similar) with their cameras facing a screen so we can see some movement from the test (6th) iPad. In order for the audio to work we'd need to plug mics into the iPads and feed each one from a speaker, with something that plays a sound one at a time (not everyone is allowed to talk at once)... this sounds ridiculous lol

What is the best way to test audio / video streaming without the requirement of multiple human beings?

Run Selenium tests using WebDriver Java for all browsers, without opening any browser

I have a general question about Selenium:

Is there a way to run Selenium WebDriver v2 tests using Java, in different browsers (Chrome\Firefox\Safari...), without opening any browser?

I read about Selenium Grid; unfortunately Grid opens the browsers/machines.

protractor e2e test for angular html5 drag and drop

As many of you know, HTML5 drag and drop is not supported by Protractor tests in AngularJS.

I got this drag-drop-helper.js on the net to simulate the drag and drop functionality. But I tried to use it in my test spec by importing it as a node module:

var dragdrop = require('./drag-drop-helper.js');

I'm getting an error saying "jquery not found". How can I solve this issue?
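
A sketch of an alternative, assuming the helper is browser-side code (it references jQuery on the page, which would explain why requiring it in the Node process fails): inject it with browser.executeScript instead. The simulateDragDrop entry point is assumed from the helper and may need adjusting.

var fs = require('fs');
var helperScript = fs.readFileSync('./drag-drop-helper.js', 'utf8');

// Run the helper inside the browser, where the page's jQuery is available.
browser.executeScript(helperScript);
browser.executeScript(
    'simulateDragDrop(arguments[0], arguments[1]);',
    source.getWebElement(), target.getWebElement());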

Ignore synchronization for $http, but not $timeout

Is there a way to ignore syncing with the http request for an Angular app?

I have this form that should be disabled during the POST of the form. When this POST request is pending, the form should be disabled and this is what I would like to write specs for.

So in Protractor I fill out the form and click the send button. The request will never get a response, waiting until the browser ends the request due to a timeout (usually 30 secs), so I have time to check whether the form is disabled. But since Protractor wants to sync with http requests, a pending request results in a timeout from Protractor.

So I added the line browser.ignoreSynchronization = true, making Protractor ignore syncing with http requests. But I believe this also means ignoring syncing with Angular as a whole, e.g. not waiting until Angular has updated data bindings before Protractor moves on.
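
A sketch of one compromise, assuming the flag can be toggled just around the pending request (the selectors below are placeholders): Protractor stops waiting for Angular only while the POST is in flight.

browser.ignoreSynchronization = true;   // stop waiting for the pending $http
element(by.css('button[type="submit"]')).click();
expect(element(by.css('form')).getAttribute('disabled')).toBe('true');
browser.ignoreSynchronization = false;  // resume normal Angular syncing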

Monkey talk to determine web service execution time in android application

I am using MonkeyTalk to automate the testing of my application.

I want to know if we can write a script that determines the execution time of a web service. If we can't do it in MonkeyTalk, is there another automated tool for Android in which we can do this?

How to run tests in different iOS simulators

Just wondering how I can run Appium tests in different iOS simulators? I am keen to run my suite against iOS 7 and 8, and iPad and iPhone, but don't want to have to update my desiredCapabilities manually after each run-through of the tests.

Kind regards,

Charlie

Appium - setting Desired Capabilities in both terminal and test code

I am trying to set some appium desired capabilities in the terminal window so that I can, for example, run my tests against different simulator devices:

Terminal: $ appium --device-name 'iPhone 6'

However, I also have to set up desired capabilities in my actual code so that I have a valid instance of IOSDriver. I use this code:

    capabilities.setCapability("platformName", "iOS");
    capabilities.setCapability("platformVersion", "8.3");
    capabilities.setCapability("app","../Build/Products/Debug-iphonesimulator/LightAlarm.app");      
        driver = new IOSDriver(new URL("http://ift.tt/1sv2im4"),capabilities);

When I run my tests I get an error that deviceName is not being set:

The following desired capabilities are required, but were not provided: deviceName

However, my terminal Appium server is all set up correctly:

info: Welcome to Appium v1.4.0 (REV dc30dae9e8fe8c85eeea707dbdbd60350fdff55b)
info: Appium REST http interface listener started on 0.0.0.0:4723
info: [debug] Non-default server args: {"deviceName":"iPhone 6"}
info: Console LogLevel: debug

Any ideas what might be going wrong?
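
For completeness, a sketch assuming the client-side capabilities must carry deviceName themselves (the server arg appears not to satisfy required capabilities here); a system property keeps the device switchable per run:

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("deviceName",
        System.getProperty("deviceName", "iPhone 6")); // pass -DdeviceName=... per run
capabilities.setCapability("platformName", "iOS");
capabilities.setCapability("platformVersion", "8.3");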

Thanks, Charlie

Selenium HTML Mock/Fixture to test JS controls

I'm using Cucumber (JVM) with Selenium and would like to test not an entire page, but a control (that can be loaded on any page with JavaScript).

So I'd like to have something like:

webdriver.get("<html>[remotejsfiles][loadcontrol]</body></html>")

As you might figure, the js files' location should be configurable. What I could do:

  • Create a static html file
  • Override the location of the js file in the html in a hook (the ugly part)
  • Use driver.get("file:/// [file] ")

But I'm pretty sure there is a better way...
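
One possible alternative, sketched under the assumption that the control can bootstrap from a data: URL, so no static file or hook is needed (loadControl and the property name are placeholders, and complex markup may need percent-encoding):

String jsLocation = System.getProperty("control.js.url", "http://localhost/control.js");
String html = "<html><body><div id='host'></div>"
        + "<script src='" + jsLocation + "'></script>"
        + "<script>loadControl(document.getElementById('host'));</script>"
        + "</body></html>";
webdriver.get("data:text/html;charset=utf-8," + html);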

How to write JUnit test cases for a void function which prints output to the console

I have two functions, addKeywords and search, which have a void return type; the search function prints results to the console. Here is the code:

 void addKeywords(String video)throws IOException{


    InputStreamReader ip = new InputStreamReader(System.in);
    BufferedReader br = new BufferedReader(ip);

    Integer previousVal=0;

    if(!videoKeyword.containsKey(video) && videoNames.contains(video)){
        videoKeyword.put(video,new HashSet<String>());
    }
    else if(!videoNames.contains(video)){
        System.out.println("Video is not a part of lookup");
    }

    System.out.println("Enter keywords for video");
    String keyword =br.readLine();

    if(!keywordLength.containsKey(video))
        keywordLength.put(video, 0);

    if((keywordLength.get(video)+keyword.length())<500){
        videoKeyword.get(video).add(keyword);
        previousVal=keywordLength.get(video);
        keywordLength.put(video, previousVal+keyword.length());
    }
    else{
        System.out.println("Maximum length exceeded for video "+ video);
    }
    if(!kewordVideo.containsKey(keyword)){
        kewordVideo.put(keyword,new HashSet<String>());
    }
    kewordVideo.get(keyword).add(video);
 }

 void search(String searchKey){
    for (Entry<String, HashSet<String>> entry : videoKeyword.entrySet()) {
        for (String s : entry.getValue()) {
            if (s.startsWith(searchKey)) {
                System.out.println(searchKey+" is mapped to "+entry.getKey());
                break;
            }
        }
     }
 }

I have written JUnit tests:

 public class MyUnitTest extends CultureMachineAssignment {
      CultureMachineAssignment testObj =new CultureMachineAssignment();
      testObj.insertDataIntoSet();
      testObj.addkeywords("video1");

     @Test
     public void testVideo() {
        assertEquals("video1", testObj.search("abcd"));
    }
}

I am getting following errors

The method assertEquals(Object, Object) in the type Assert is not applicable for the arguments (String, void)

  • Syntax error on token ""video1"", delete this token
  • Syntax error on token(s), misplaced construct(s)

I am not sure if this is the correct way to write JUnit test cases for functions which have a void return type and print output to the console. Can someone please tell me the correct code?
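
A common pattern for this is sketched below. Two things are assumed: the bare statements at class level must move into a method (that is what the syntax errors are complaining about), and console output can be captured by swapping System.out for an in-memory stream. The expected output line follows from the search method above, assuming insertDataIntoSet registers "video1" as a known video.

import static org.junit.Assert.assertTrue;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.PrintStream;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class MyUnitTest {

    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private CultureMachineAssignment testObj;

    @Before
    public void setUp() throws Exception {
        System.setOut(new PrintStream(out));                          // capture console output
        System.setIn(new ByteArrayInputStream("abcd\n".getBytes()));  // simulate typed keyword
        testObj = new CultureMachineAssignment();
        testObj.insertDataIntoSet();
        testObj.addKeywords("video1");
    }

    @After
    public void tearDown() {
        // restore the real console
        System.setOut(new PrintStream(new FileOutputStream(FileDescriptor.out)));
    }

    @Test
    public void searchPrintsMappedVideo() {
        out.reset();                               // ignore output from setup
        testObj.search("abcd");
        assertTrue(out.toString().contains("abcd is mapped to video1"));
    }
}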

mardi 26 mai 2015

Selenium when to use By id/name/class/xpath/css, and page object

I'm starting to work with Selenium WebDriver v2, and I have a few questions:

1- When to use By.id, By.name, By.className, By.cssSelector, By.xpath...

2- Is it good to combine all the By functions in the same test project?

3- When to use a page object? Is it recommended for dynamic pages? (A sketch follows below.)
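
On (3), a minimal Page Object sketch (the page and its locators are hypothetical): prefer By.id when stable ids exist, with CSS or XPath as fallbacks for dynamic content, and keep the locators inside the page class so tests only call intent-level methods.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String user, String pass) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(pass);
        driver.findElement(By.cssSelector("button[type='submit']")).click();
    }
}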

Gradle configuring TestNG and JUnit reports dirs

I am relatively new to Gradle, and we use both JUnit and TestNG unit tests in our project. With Google's help I figured out how to get both TestNG and JUnit tests running. Below is how I ended up achieving it:

build.gradle
....    
task testNG(type: Test) {
    useTestNG {}
}

test {
    dependsOn testNG
}

However, only the JUnit reports are produced. Google again helped me with this link (http://ift.tt/1eu0FVS) which shows how to solve a problem that looks exactly like mine, by configuring two separate test report folders like below:

testng.testReportDir = file("$buildDir/reports/testng")
test.testReportDir = file("$buildDir/reports/testjunit")

However, it does not exactly say where to put those two entries, and I feel like I am going crazy looking at Gradle books, examples, and the API without figuring it out. According to the API, the test task has a reports property where you can configure a TestTaskReports instance, but whatever I tried failed. Can you please help me with this? It must be something so obvious that I am missing it.
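
For what it's worth, a sketch of where those entries can live, assuming the TestTaskReports API on each Test task (the destinations are configured inside the task blocks):

task testNG(type: Test) {
    useTestNG {}
    reports.html.destination = file("$buildDir/reports/testng")
}

test {
    dependsOn testNG
    reports.html.destination = file("$buildDir/reports/testjunit")
}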

Thank you in advance

JUnit tests involving unloading classes in the JVM

I'm involved in a project where the application we are developing can be customized and additional features added to it through plugin-jars. When the application launches, it checks the plugins folder for any jars and attempts to load all the classes defined in that jar.

Data generated by the application is stored on disk after being serialized through Kryo. Ideally, data generated by one user with one installation of the application should be accessible to another user with another installation of the application. Also ideally, the two users should be able to maintain independent configurations of their installations with independent sets of plugins.

The complication is if any of the data generated by User 1 depends on a particular class defined in one of the plugins that are present on User 1's installation of the application. If User 2 then tries to access that data and the class is not available in their installation, we need to fail gracefully without completely rendering the data inaccessible to User 2.

Our team has been able to design data structures that should, in principle, be able to fail gracefully and not render the data completely inaccessible. However, we'd like to implement some JUnit tests to verify their behavior. To this end, we were hoping there could be some way save generated data to disk, then unload the class definition, and finally attempt to reload the data from disk and verify certain assertions, all in a single JUnit test run.

Is there an elegant way to do this?

As an example, say the application we are developing is a CircusManagement application which generates and saves data about the properties and maintenance of various Troupes. The capabilities of a particular Venue depend on which plugin-jars are included in the installation of the application at that venue. At venue1 the venue manager updates troupe data and saves the data to disk. The troupe is now at venue2 and the venue manager is attempting to load the data from the last venue. However, venue2 doesn't have any OnSiteVetClinic and so can't load any of the data associated with that, but should be able to load all the other data. We need a JUnit test that will simulate the generation of data on disk from venue1 that includes the OnSiteVetClinic data type, and then the loading of that from venue2 that doesn't have that data type.
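
One approach worth sketching, with all names hypothetical: load the plugin classes through a dedicated URLClassLoader, serialize the data, then close and drop the loader so the class definitions become collectible before reloading.

// Inside a single JUnit test method (throws clauses omitted for brevity):
URLClassLoader pluginLoader = new URLClassLoader(
        new URL[]{ new File("plugins/vetclinic.jar").toURI().toURL() },
        getClass().getClassLoader());
Class<?> clinicClass = pluginLoader.loadClass("com.example.OnSiteVetClinic");
// ... generate data with clinicClass and serialize it to disk via Kryo ...
pluginLoader.close();   // Java 7+; releases the jar
pluginLoader = null;
clinicClass = null;
System.gc();            // the class may now be unloaded
// ... reload the data from disk and assert the graceful-failure behavior ...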

Friendly_id gem and its effects on tests (without slug)

I've implemented the friendly_id gem by adding to the User model:

include FriendlyId
friendly_id :username

In the controller I've replaced User.find(params[:id]) with:

User.friendly.find(params[:id])

So I'm not using Slug.

Problem: Now suddenly all sorts of tests fail. Just one example:

class AvatarUploadTest < ActionDispatch::IntegrationTest
  def setup
    @admin             = users(:michael)
  end

  test "avatar upload" do
    log_in_as("user", @admin)    # Helper method that logs in.
    get user_path(@admin)
    ...etc

The line get user_path(@admin) fails with the error message ActiveRecord::RecordNotFound: ActiveRecord::RecordNotFound. How does this relate to the friendly_id gem? I don't really understand it and don't know what adjustments I need to make to my tests.
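
A plausible cause, sketched below: user_path(@admin) now builds the URL from username (friendly_id changes to_param), so if the fixture never sets username, friendly.find raises RecordNotFound. Assuming standard Rails fixtures, the fields below are illustrative:

# test/fixtures/users.yml -- ensure the friendly_id attribute is present
michael:
  name: Michael Example
  username: michael
  email: michael@example.com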

Using Ant as a continuous testing tool

So after much hunting I failed to find a continuous testing tool for IntelliJ 14.

I stumbled across a post that references using Eclipse and Ant in order to simulate this. On save, Ant then runs the tests for anything that was modified.

I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.

Any help, pointers in the right direction is very much appreciated. I've searched but only found information that needs to be decrypted first.
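
For a starting point, a minimal Ant target sketch that compiles against an assumed classpath and runs JUnit tests (paths and classpath ids are placeholders); an IDE file watcher or save hook can then invoke `ant test`:

<target name="test" depends="compile">
  <junit printsummary="true" haltonfailure="false">
    <classpath refid="test.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="build/reports">
      <fileset dir="src/test" includes="**/*Test.java"/>
    </batchtest>
  </junit>
</target>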

C# Testing Packet Loss Issues

I am looking at writing some integration tests that will ensure my messaging layer behaves correctly upon lost packets/network issues, as I need to be sure there will be no duplicates when a process reconnects at either end of the connection, ideally without keeping a list of received messages.

TCP is used as the connection protocol.

Test Scenarios I want to test are:

  1. Sender Cannot connect (easy enough to test by connecting to non existent server)
  2. Server receives, but something happens between TCP and the application layer on the client when receiving confirmation, after the TCP ACK. E.g. the server thinks it's ACK'ed, but the client never receives it. (Is this a viable worry in any application?)

I am currently thinking of implementing a new process that will act as a proxy for the TCP connection, connect to it via WCF from the integration test to configure the forwarding initially, and then switch it on/off, or decide which packets get forwarded and which don't via a callback in the test. That way I can simulate such a scenario (a sketch follows the list below).

  1. Is there something like that already in C#?
  2. Has someone tried this already and failed? Why?
  3. Is there a better approach (especially for message based testing)?
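
A minimal sketch of such a proxy in C# (the class and its API shape are mine, not an existing library): the test flips ForwardingEnabled to simulate drops between the endpoints.

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public class FlakyTcpProxy
{
    public volatile bool ForwardingEnabled = true;

    public async Task RunAsync(int listenPort, string targetHost, int targetPort)
    {
        var listener = new TcpListener(IPAddress.Loopback, listenPort);
        listener.Start();
        using (var client = await listener.AcceptTcpClientAsync())
        using (var server = new TcpClient(targetHost, targetPort))
        {
            await Task.WhenAll(
                PumpAsync(client.GetStream(), server.GetStream()),
                PumpAsync(server.GetStream(), client.GetStream()));
        }
    }

    private async Task PumpAsync(NetworkStream from, NetworkStream to)
    {
        var buffer = new byte[8192];
        int read;
        while ((read = await from.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            if (ForwardingEnabled)           // silently drop data when disabled
                await to.WriteAsync(buffer, 0, read);
        }
    }
}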

Thanks

Where to put external files for testthat tests

Suppose I have a test like this:

require(testthat)
context("toy test")
test_that("toy", {
            df = my.read.file("test.txt", header=TRUE)
            expect_true(myfunc(df) == 3.14)
})

and this test relies on an external file, test.txt; where should I put this file?
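
By convention (assuming the standard testthat layout), files placed next to the test scripts in tests/testthat/ can be referenced with a bare relative path; a sketch:

# tests/testthat/test.txt lives beside the test script, so:
df <- my.read.file("test.txt", header = TRUE)

# Alternatively, ship the file in inst/testdata and resolve it at run time:
path <- system.file("testdata", "test.txt", package = "mypackage")
df <- my.read.file(path, header = TRUE)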

SQA Testing: async callback javascript test

I am a newbie to test automation. I have come across this question a couple of times in different flavors. The first one is: a method returns values in random order to a list; now I have to write a test to assert the values in the list are correct. The second one is: there is a JavaScript async callback method that returns chunks of data; how do I test that this data is complete and correct? I may need to write a test script for this one. I am not sure if these two questions are similar in nature, but I would like to know the answers or some guidance to refer to. I appreciate the help.

How to set MochaJS global timeout in browser

I'm trying to run some UI tests in a headless browser using MochaJS and I can't seem to get the timeout option to set correctly.

I've got the following running in my browser after I've loaded MochaJS:

window.mocha.setup({
    timeout: 10000
}).run();

The tests run, but I keep getting the following for one of my "slower" tests:

message: 'timeout of 2000ms exceeded. Ensure the done() callback is being called in this test.'

I've read the source for MochaJS and AFAIK, the .setup() that I've got above should set the global timeout to 10000ms, but it looks like it's still stuck at the default, 2000ms.

What am I doing wrong?
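
One thing worth checking, sketched below: setup() has to run before the spec files are loaded, and run() only after they have registered, or the specs keep the default 2000ms. The ui option is shown only for completeness.

mocha.setup({ ui: 'bdd', timeout: 10000 });   // must precede loading the specs
// <script src="specs/slow-test.js"></script>  ... load spec files here ...
mocha.run();                                   // only after specs are registered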

Best practice: one JUnit test should test exactly one method?

For example Lars Vogel specifies in his article: "A unit test targets a small unit of code, e.g., a method or a class, (local tests). External dependencies should be removed for unit tests, e.g., by replacing the dependency with a test implementation or a (mock) object created by a test framework. "

Say I have a class MyClass that has methods a, b, c and d. Method a calls methods b and c. Method c calls method d.

If I have a unit test for method a, testMethodA(), should I (or is it good practice to) mock all the other method calls made by method a, in order to test just method a?

If I mocked the return values of the other methods called by method a, would I then truly be testing only method a?

Is there any documentation or reference that shows the best practice: what kinds of units to test and what to mock?
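
As a sketch of the usual approach (Mockito, with hypothetical method signatures): a spy lets testMethodA stub b and c so the assertion exercises only a's own logic.

MyClass myClass = Mockito.spy(new MyClass());
Mockito.doReturn(42).when(myClass).b();       // stub the collaborator methods
Mockito.doReturn("x").when(myClass).c();

myClass.a();                                  // only a's own logic runs unstubbed
Mockito.verify(myClass).b();                  // a delegated to b as expected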

Check ios deployment target of the static library

I have many static libs, like libBlah.a. With the file tool I can check the supported architectures (arm64 or i386).

Is there a tool to check the iOS Deployment Target of a static lib?
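
A sketch of one way to check, assuming Xcode's command line tools are installed: the load commands of each object in the archive record the minimum OS version it was built for.

# LC_VERSION_MIN_IPHONEOS carries the deployment target of each slice
otool -l libBlah.a | grep -A 2 LC_VERSION_MIN_IPHONEOS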


Testing in a page context

I need some guidance on how to run JS unit tests in a web-page context.

I have a page where graphs will be drawn using a 3rd party JS library. Also, there will be some filtering logic written by me - I want to test this piece.

The challenge I have is that I have to have a DOM present for this library to work, and I'm not quite sure how to run tests in this context.

It all looks something like this:

On a web page I have a div. Then in a JS file I will say:

var drawing = DrawStuffIn(document.getElementById("my-div"));
drawing.FilterBy(something);
var filteredItems = drawing.GetFilteredItems();

At this point I want to make sure that filteredItems contain what I expect.

I was looking at using PhantomJS and Jasmine, but not quite sure how to fit it all together.
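
A sketch of how the pieces can fit (keeping the question's DrawStuffIn names, which are placeholders): each spec creates and removes its own DOM fixture, and PhantomJS simply runs the same Jasmine spec runner headlessly.

describe("drawing filter", function () {
    var container;

    beforeEach(function () {
        container = document.createElement("div");
        container.id = "my-div";
        document.body.appendChild(container);   // the DOM node the library needs
    });

    afterEach(function () {
        document.body.removeChild(container);   // keep specs isolated
    });

    it("returns only matching items", function () {
        var drawing = DrawStuffIn(document.getElementById("my-div"));
        drawing.FilterBy(something);
        expect(drawing.GetFilteredItems()).toEqual([/* expected items */]);
    });
});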

Getting "Error during sync. Timeout." error on android CTS (Compatibility Test Suite).

Problem: When I try to run a complete test plan with CTS from the Ubuntu command line ("run cts --plan CTS"), I get the error message "Error during sync. Timeout."

Additional Information:

  • I'm running Ubuntu 12.04.0 LTS on the VirtualBox.
  • the host Operating System is Windows 7 Professional.
  • The device I'm running CTS on is physically connected to a USB 3.0 Port.

Tried (and failed) Solutions:

Solution 1: I tried restarting both the host OS and Ubuntu after getting the "Error during sync. Timeout." message

Solution 2: I connected the device physically to a USB 2.0 Port

Solution 3: Used a different wire to connect my computer to the device.

Solution 4: Shut the device down and turned it back on after getting the "Error during sync. Timeout." message.

Solution 5: I restarted the adb server by typing "adb kill-server" and then typing "adb devices"

Note that throughout all the solutions above, I made sure that the guest OS in VirtualBox could recognize the plugged-in USB device. I did this by:

  • Typing in "lsusb" in the command line
  • Typing "adb devices" in the command line.
  • Checking that the "USB Debugging" option is checked under Developer Options on the device.

Any answers are greatly appreciated! If there's any information I forgot to bring up, please let me know.

Testing a Rest Api which accepts a zip file as input

I have a REST API like the one below.

@POST
@Path("/importFile")
@Consumes("application/zip")
@ApiOperation(value = "Importing File")
public List<String> importFile(InputStream is) throws IOException {
    ZipInputStream zipInputStream = new ZipInputStream(is);
    return importFile(zipInputStream);
}

How can I test it?
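
A sketch of an integration-style test, assuming a JAX-RS 2.0 client and a locally running server (the base URL is a placeholder): build a small zip in memory and POST it.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Response;

@Test
public void importsAZipFile() throws Exception {
    byte[] zipBytes;
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
         ZipOutputStream zos = new ZipOutputStream(bos)) {
        zos.putNextEntry(new ZipEntry("hello.txt"));   // one entry is enough
        zos.write("hello".getBytes(StandardCharsets.UTF_8));
        zos.closeEntry();
        zos.finish();
        zipBytes = bos.toByteArray();
    }

    Response response = ClientBuilder.newClient()
            .target("http://localhost:8080/api/importFile")  // assumed base URL
            .request()
            .post(Entity.entity(zipBytes, "application/zip"));
    assertEquals(200, response.getStatus());
}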

Unable to select element by model

I need to select an element by model name but it's not working; the details are given below.

The element is:

textarea rows="8" ng-model="panel.information_text['comment']" >

Our Code:

input_ele = element(by.model("panel.information_text[\'comment\']"));
console.log(input_ele.getText());

Output:

{ ptor_:
    { controlFlow: [Function],
    schedule: [Function],
    setFileDetector: [Function],
    getSession: [Function],
    getCapabilities: [Function],
    quit: [Function],
    actions: [Function],
    touchActions: [Function],
    executeScript: [Function],
    executeAsyncScript: [Function],
    call: [Function],
    wait: [Function],
    sleep: [Function],
    getWindowHandle: [Function],
    getAllWindowHandles: [Function],
    getPageSource: [Function],
    close: [Function],
    getCurrentUrl: [Function],
    getTitle: [Function],
    findElementInternal_: [Function],
    findDomElement_: [Function],
    findElementsInternal_: [Function],
    takeScreenshot: [Function],
    manage: [Function],
    switchTo: [Function],
    driver:
    }
}

Any help would be appreciated.
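
For what it's worth, the output above is the unresolved promise, not the text: getText() returns a promise that has to be resolved (or passed to expect, which resolves it). A sketch:

var input_ele = element(by.model("panel.information_text['comment']"));

input_ele.getText().then(function (text) {
    console.log(text);                          // the actual text, not the wrapper
});

// For a textarea's typed content, the value attribute is often what's wanted:
expect(input_ele.getAttribute('value')).toBe('expected comment');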

python cross platform testing: mocking os.name

What is the correct way to mock os.name?

I am trying to unit-test some cross-platform code that uses os.name to build platform-appropriate strings. I am running on a Windows machine but want to test code paths for both posix and Windows.

I've tried:

production_code.py

from os import name as os_name

def platform_string():
    if 'posix' == os_name:
      return 'posix-y path'
    elif 'nt' == os_name:
      return 'windows-y path'
    else:
      return 'unrecognized OS'

test_code.py

import production as production 
from nose.tools import patch, assert_true

class TestProduction(object):
    def test_platform_string_posix(self):
        """
        """
        with patch.object(os, 'name') as mock_osname:
            mock_osname = 'posix'
            result = production.platform_string()
        assert_true('posix-y path' == result)

This fails because os is not in the global scope of test_code.py. If os is imported in test_code.py, then we will always get os.name == 'nt'.

I've also tried:

def test_platform_string_posix(self):
    """
    """
    with patch('os.name', MagicMock(return_value="posix")):
        result = production.platform_string()
    assert_true('posix-y path' == result)

in the test, but this seems not to work because os.name is an attribute, not a method with a return value.
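
A sketch that usually works here, assuming the mock library (patch lives there, not in nose.tools): because production.py does `from os import name as os_name`, the binding to patch is production.os_name, and patch can replace it with a plain string.

from mock import patch  # on Python 3.3+: from unittest.mock import patch

import production


@patch('production.os_name', 'posix')  # replace the module-level binding
def test_platform_string_posix():
    assert production.platform_string() == 'posix-y path'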

S/W testing: How to make string test cases by using class partitioning?

I'm learning S/W testing and practicing writing test cases, but I have no idea what to do when the inputs are strings. I have an example; please give me some advice on how to partition the classes and determine the boundaries...

My program takes two inputs, which are two non-empty strings. It determines which one has more vowel characters and prints that string.

For example, when 'mike' and 'hi' are entered, the program prints mike. If both strings have the same number of vowels, the program prints equal.

I understand that the number of inputs can be a class, and I know that 2 and 4 can be boundary/invalid values of that class. But that's all I know...

C# Update XML document within project

I'm wondering if it's possible to update an XML document I have created for my tests, which is stored within the main project folder and then copied to the bin folder on each build. The project will be rebuilt before each test run.

The background to my query: I'm looking to create a test where I can insert an XML document into a portal for uploading. The document has a unique identifier element within it, the Id element; the document will only be accepted if the Id element is a new value which isn't already in the system.

What I'm looking to do on test initialize is load the document using the XmlDocument class, strip it down to the Id element to store in a local int variable, then increment the int value by 1 and pass it back into the document as an update.

I can do this for one instance: the document stored in the project, living in the bin folder, has Id = 1; after the first test run, the document stored in the project stays at Id = 1 but the copy in the bin folder has its Id incremented to 2. This is all fine and well until the next time the project is rebuilt, which puts me back in a state where the documents in the project/bin folders revert to Id = 1.

Is there a way I can write to the xml document within the project?

Example of what I'm doing.

public override void Init()
{
     doc.Load("XmlDoc.xml");
     // InnerText is a string, so parse it before incrementing
     int id = int.Parse(doc.SelectSingleNode("//root/id").InnerText);
     id = id + 1;
     doc.SelectSingleNode("//root/id").InnerText = id.ToString();
     doc.Save("XmlDoc.xml");
}

Hopefully this makes some sort of sense :)
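
A sketch of one workaround, assuming the tests run from bin/Debug (two levels below the project folder): save the incremented document back to the source copy so it survives rebuilds.

// Walk from bin/Debug back up to the project folder (layout assumed).
string projectPath = Path.GetFullPath(Path.Combine(
    AppDomain.CurrentDomain.BaseDirectory, "..", "..", "XmlDoc.xml"));
doc.Save(projectPath);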

JUnit testing: simulating user input

I need to test a method that asks the user for input and charges the player the entered amount. The method to be tested:

public void askForBetSize() {
    System.out.println("\nYour stack: " + player.getBalance());
    System.out.print("Place your bet: ");
    bet = Integer.parseInt(keyboard.nextLine()); // = this needs to be simulated
    player.charge(bet);
}

Current unit test is:

@Test 
public void bettingChargesPlayerRight() {
    round.setCards();
    round.askForBetSize(); // here I would like to simulate a bet size of 100
    assertEquals(900, round.getPlayer().getBalance()); // default balance is 1000
}

I tried to implement this and this but after testing previous classes the test stopped running when it started to test this method.
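
A sketch of the stdin swap, with one assumption called out in the comment: the reader over System.in has to be created after the call to System.setIn, or it keeps reading the old stream.

@Test
public void bettingChargesPlayerRight() {
    // Assumes `keyboard` is constructed after this call (e.g. inside
    // askForBetSize); a reader created earlier still holds the old System.in.
    System.setIn(new ByteArrayInputStream("100\n".getBytes()));

    round.setCards();
    round.askForBetSize();                 // reads "100" as the bet
    assertEquals(900, round.getPlayer().getBalance());
}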

Jhipster - JpaRepository "principal.username" query - Error in Test

I get an error while testing my REST controller on a specific method. I am using the @Query annotation for my database query, and it uses "principal.username" to do it. I don't have the full picture of how principal.username is fetched and used in my application; I am currently looking at the Spring Security documentation about it. My problem is in the test part: when I execute the test below, I get a failure because of the @Query.

The repository:

public interface MeetingRepository extends JpaRepository<Meeting, Long> {

  @Query("select m from Meeting m where m.visibility = 'PUBLIC' OR m.user.login = ?#{principal.username}")
  Page<Meeting> findOpenAndUserMeetings(Pageable pageable);

}

A Rest Controller Method:

@RequestMapping(value = "/api/meetings",
        method = RequestMethod.GET,
        produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<Meeting>> getAll(@RequestParam(value = "page" , required = false) Integer offset,
                              @RequestParam(value = "per_page", required = false) Integer limit)
    throws URISyntaxException {
    Page<Meeting> page = MeetingRepository.findOpenAndUserMeetings(PaginationUtil.generatePageRequest(offset, limit));
    HttpHeaders headers = PaginationUtil.generatePaginationHttpHeaders(page, "/api/meetings", offset, limit);
    return new ResponseEntity<List<Meeting>>(page.getContent(), headers, HttpStatus.OK);
}

A test:

@Test
@Transactional
public void getAllMeetings() throws Exception {
    // Initialize the database
    MeetingRepository.saveAndFlush(Meeting);

    // Get all the Meetinges
    restMeetingMockMvc.perform(get("/api/meetings"))
            .andExpect(status().isOk())
            .andExpect(content().contentType(MediaType.APPLICATION_JSON));
}

And this error:

getAllMeetings(com.vallois.valcrm.web.rest.MeetingResourceTest)  Time elapsed: 0.07 sec  <<< ERROR!
    org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.expression.spel.SpelEvaluationException: EL1008E:(pos 10): Property or field 'username' cannot be found on object of type 'java.lang.String' - maybe not public?
        at org.springframework.expression.spel.ast.PropertyOrFieldReference.readProperty(PropertyOrFieldReference.java:226)
        at org.springframework.expression.spel.ast.PropertyOrFieldReference.getValueInternal(PropertyOrFieldReference.java:93)
        at org.springframework.expression.spel.ast.PropertyOrFieldReference.access$000(PropertyOrFieldReference.java:46)
        at org.springframework.expression.spel.ast.PropertyOrFieldReference$AccessorLValue.getValue(PropertyOrFieldReference.java:372)
        at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:88)
        at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:131)
        at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:299)
        at org.springframework.data.jpa.repository.query.SpelExpressionStringQueryParameterBinder.evaluateExpression(SpelExpressionStringQueryParameterBinder.java:131)
        at org.springframework.data.jpa.repository.query.SpelExpressionStringQueryParameterBinder.potentiallyBindExpressionParameters(SpelExpressionStringQueryParameterBinder.java:89)
        at org.springframework.data.jpa.repository.query.SpelExpressionStringQueryParameterBinder.bind(SpelExpressionStringQueryParameterBinder.java:69)
        at org.springframework.data.jpa.repository.query.AbstractStringBasedJpaQuery.doCreateCountQuery(AbstractStringBasedJpaQuery.java:109)
        at org.springframework.data.jpa.repository.query.AbstractJpaQuery.createCountQuery(AbstractJpaQuery.java:190)
        at org.springframework.data.jpa.repository.query.JpaQueryExecution$PagedExecution.doExecute(JpaQueryExecution.java:173)
        at org.springframework.data.jpa.repository.query.JpaQueryExecution.execute(JpaQueryExecution.java:74)
        at org.springframework.data.jpa.repository.query.AbstractJpaQuery.doExecute(AbstractJpaQuery.java:97)
        at org.springframework.data.jpa.repository.query.AbstractJpaQuery.execute(AbstractJpaQuery.java:88)
        at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:395)
        at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:373)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodIntercceptor.invoke(CrudMethodMetadataPostProcessor.java:122)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
        at com.sun.proxy.$Proxy148.findOpenAndUserMeetings(Unknown Source)
        at com.vallois.valcrm.web.rest.MeetingResource.getAll(MeetingResource.java:77)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137)
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110)
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:776)
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:705)
        at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:966)
        at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:857)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
        at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842)
        at org.springframework.test.web.servlet.TestDispatcherServlet.service(TestDispatcherServlet.java:65)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
        at org.springframework.mock.web.MockFilterChain$ServletFilterProxy.doFilter(MockFilterChain.java:167)
        at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:134)
        at org.springframework.test.web.servlet.MockMvc.perform(MockMvc.java:144)
        at com.vallois.valcrm.web.rest.MeetingResourceTest.getAllMeetings(MeetingResourceTest.java:151)
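
A sketch of one likely fix, assuming the test's security context currently holds a plain String principal (which is what the SpEL error says): install a UserDetails principal before performing the request, so principal.username resolves.

// In the test setup, before restMeetingMockMvc.perform(...):
UserDetails principal = new User("admin", "",
        Collections.singletonList(new SimpleGrantedAuthority("ROLE_ADMIN")));
Authentication auth = new UsernamePasswordAuthenticationToken(
        principal, principal.getPassword(), principal.getAuthorities());
SecurityContextHolder.getContext().setAuthentication(auth);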

A spell check Tool/ utility for desktop application

I am a QA by profession. I need a spell-check utility for checking the spelling of my desktop application during black-box testing (outside view), so that the tool or utility can highlight the misspelled words in the application.

Recording script with LoadRunner



I'm trying to record a script with LoadRunner but nothing happens. I'll try to be more specific: I create a new web-based script (Web - HTTP/HTML) because I want to record actions taken in IE. I start doing things in IE and then stop the recording. What I expect is to find in "Action" the code that describes what I've just done in IE, but nothing appears: "Action" contains only the return statement.

Any idea about what the issue could be?!

How to retrieve @Test method parameters in a @DataProvider method?

I would like to retrieve the parameter names of the @Test method in the @DataProvider method. By using method.getParameterTypes() in the @DataProvider, I am able to get the classes of the params being passed to the @Test method, but I want the names.

@Test
public void TC_001(String userName, String passWord){
//code goes here
}

@DataProvider
public Object[][] testData(Method method){
//Here I want to get names of param of test method i.e. userName and passWord
}

This is required because, using these names, I can get the data from my Excel file.
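
A sketch under one important assumption: real parameter names are only available via reflection on Java 8 when the code is compiled with the -parameters flag; otherwise they come back as arg0, arg1.

@DataProvider
public Object[][] testData(Method method) {
    for (java.lang.reflect.Parameter p : method.getParameters()) {
        // "userName", "passWord" with -parameters; arg0/arg1 without it
        System.out.println(p.getName());
    }
    return new Object[][] {{ "user1", "pass1" }}; // placeholder for the Excel lookup
}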

How to test this async server response with Jasmine?

I want to write a Jasmine test for an async fetch call. This is how my test looks at the moment:

describe("async fetch call", function(){
        it ("should contact server and call callback success", function(){
            var server = sinon.fakeServer.create();
            server.autoRespond = true;
            server.respondWith([200, {'Content-Type': 'application/json'}, '{"collection":"coll","response":"resp", "options":"opts", "readyState":"1"}']);
            var callback = {
                success: jasmine.createSpy("success").and.callFake(function (){
                    expect(callback.success).toHaveBeenCalled();
                }),
                error: jasmine.createSpy("error")
            };
            session.getCompanies(callback);
            server.restore();
        });
    });

The function session.getCompanies makes the fetch call. I use Jasmine 2.2.0 and SinonJS. When I run this test, expect is called too late. How can I make the test wait until expect has been called?
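
A sketch using Jasmine 2.x's async support (keeping session.getCompanies from the question): passing done into the spec makes Jasmine wait until the callback fires before evaluating.

it("should contact server and call callback success", function (done) {
    var server = sinon.fakeServer.create();
    server.autoRespond = true;
    server.respondWith([200, {'Content-Type': 'application/json'},
        '{"collection":"coll","response":"resp","options":"opts","readyState":"1"}']);

    session.getCompanies({
        success: function () {
            server.restore();
            done();                     // the spec finishes only after success ran
        },
        error: function () {
            server.restore();
            done.fail("error callback should not be called");
        }
    });
});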

Angularjs: Testing controller function, which changes $scope

I'm trying to test this controller function:

$scope.deleteAccount = function(account) {

  Account.deleteAccount(account._id)
  .then(function() {
    angular.forEach($scope.accounts, function(a, i) {
      if (a === account) {
        $scope.accounts.splice(i, 1);
      }
    });
  })
  .catch(function(err) {
    $scope.errors.other = err.message;
  });
};   

It is on an admin page. The function calls the factory (which returns a promise), and the factory deletes the account on the server. Then the function removes the element from the scope so that the deleted element isn't shown again.

My test looks like that:

beforeEach(inject(function ($controller, $rootScope, _$location_, _$httpBackend_) {

    $scope = $rootScope.$new();
    $location = _$location_;
    $httpBackend = _$httpBackend_;
    fakeResponse = '';

    AdminAccountCtrl = $controller('AdminAccountCtrl', {
      $scope: $scope
    });
    $location.path('/admin/account');
}));

it('test delete account', function () {
    expect($location.path()).toBe('/admin/account');

    $httpBackend.expectGET('/api/accounts').respond([{_id: 1}, {_id: 2}, {_id: 3}]);
    $httpBackend.when('GET', 'app/admin/admin.account.html').respond(fakeResponse);
    $httpBackend.when('DELETE', '/api/accounts/1').respond(fakeResponse);
    $httpBackend.flush();

    $scope.deleteAccount($scope.accounts[0]);
    expect($scope.accounts).toEqual([{_id: 2}, {_id: 3}]);
});

Sadly the result is:

Expected [ { _id : 1 }, { _id : 2 }, { _id : 3 } ] to equal [ { _id : 2 }, { _id : 3 } ].
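
A sketch of the likely missing step, assuming Account.deleteAccount issues the DELETE through $http: the .then() callback only runs once the pending request is flushed (which also triggers a digest), so flush again after calling deleteAccount.

$scope.deleteAccount($scope.accounts[0]);
$httpBackend.flush();   // resolves the DELETE; .then() splices the scope array
expect($scope.accounts).toEqual([{_id: 2}, {_id: 3}]);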