lundi 31 août 2015

Using ng-describe for end-to-end testing with protractor

I've recently discovered the awesome ng-describe package, which makes writing unit tests for AngularJS applications very transparent by abstracting away all of the boilerplate code you would otherwise have to remember (or look up) and write in order to load, inject, mock, or spy.

Has anybody tried to use ng-describe with protractor? Does it make sense, and can we benefit from it?


One of the things that caught my eye is how easily you can mock HTTP responses:

ngDescribe({
  inject: '$http', // for making test calls
  http: {
    get: {
      '/my/url': 42, // status 200, data 42
      '/my/other/url': [202, 42], // status 202, data 42,
      '/my/smart/url': function (method, url, data, headers) {
        return [500, 'something is wrong'];
      } // status 500, data "something is wrong"
    }, 
    post: {
      // same format as GET
    }
  },
  tests: function (deps) {
    it('responds', function (done) {
      deps.$http.get('/my/other/url')
        .then(function (response) {
          // response.status = 202
          // response.data = 42
          done();
        });
      deps.http.flush(); // ng-describe exposes the $httpBackend mock as deps.http
    });
  }
});

Mocking HTTP responses usually helps achieve better e2e coverage and lets us test how the UI reacts to specific situations and how the error handling works. This is something we are currently doing with protractor-http-mock; there are other options too, but none look as easy as ng-describe.

How do I find a fixture's name, given an id in Rails tests?

Rails provides a method for looking up an ID given a fixture's name:

See [ActiveRecord::FixtureSet.identify](http://ift.tt/1Q51HVG)

Suppose, in a failing test, I'd like to print out the name of a problematic fixture given its ID. How do I do the reverse lookup?
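If it helps, `identify` is (as far as I know) just a CRC32 of the label, modulo 2**30 - 1, so a reverse lookup can be built by hashing every candidate fixture name and matching against the ID. A self-contained sketch — the `identify` reimplementation here is an assumption, so check it against `ActiveRecord::FixtureSet.identify` in your Rails version:

```ruby
require "zlib"

# Assumption: ActiveRecord::FixtureSet.identify is CRC32(label) % (2**30 - 1)
# in your Rails version; verify before relying on it.
MAX_ID = 2**30 - 1

def identify(label)
  Zlib.crc32(label.to_s) % MAX_ID
end

# Reverse lookup: find the fixture name whose identify value matches the id.
def fixture_name_for(id, fixture_names)
  fixture_names.find { |name| identify(name) == id }
end

puts fixture_name_for(identify("david"), %w[david mary])  # => david
```

In a real test you would feed `fixture_name_for` the keys of the relevant fixture file (e.g. the YAML keys for the model's fixtures).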

loopback-testing: what is the correct way to test with a user and role

I am trying to write tests with loopback-testing.

I am a bit confused, and there's barely any documentation at all.

I'd like to test a model for which only a user with role "admin" has WRITE rights.

Now, if I do:

lt.describe.whenCalledByUserWithRole(test_config.adminUserCredentials, test_config.adminRole, ....)

loopback will actually create a user with test_config.adminUserCredentials prior to login! Why is it doing that?

The correct behavior, IMHO, should be:

  • Create the user in test setup
  • Create the role in test setup
  • Associate the role to the user
  • When running the test, only check that the user has rights on the requested operation

But it looks to be quite tricky to do with loopback-testing. If I create a user in setup, the test will crash because whenCalledByUserWithRole will in the process try to create the user again, which loopback will deny saying the user already exists. If I don't create a user and call whenCalledByUserWithRole, this user won't be associated to the "admin" role for some reason (even if the name suggests so), and the test fails.

How do I do this correctly?

Adding an assertion to ALL SoapUI test cases

We have a somewhat large project with a series of tests against endpoints. Due to the way the configuration for this API works, there's occasionally a chance that a field in any given response could be missing, replaced with the string "[invalid field]". Obviously when this happens, something is broken and we need to fix it, so I want to check for this string in all responses.

Is there a way to check all responses for this, or should I just put an assertion into each test manually?

Error: Invalid Guardfile

I'm very new to Rails and attempting to execute a Guardfile from a tutorial, but it's throwing an error and I'm not entirely sure why, though I'm sure it's pretty simple. Thanks in advance!

Invalid Guardfile, original error is:

undefined method `merge' for nil:NilClass

Setting mock location in FusedLocationProviderApi

I'm trying to get mock updates from FusedLocationProviderApi, but I can't seem to make it work. This is my setup method in an Android instrumentation test:

locationProvider = new LocationProvider(InstrumentationRegistry.getTargetContext().getApplicationContext(), settings);

// Connect first, so that we don't receive 'true' location
locationProvider.googleApiClient.blockingConnect();
// Set mock mode, to receive only mock locations
LocationServices.FusedLocationApi.setMockMode(locationProvider.googleApiClient, true);

InstrumentationRegistry.getInstrumentation().runOnMainSync(new Runnable() {
    @Override
    public void run() {
        // Start receiving locations; this call will connect api client, and if it's already connected (or after it connects) it will register for location updates
        locationProvider.start();
    }
});
// wait until we can request location updates
while (!locationProvider.isReceivingLocationUpdates) {
    Thread.sleep(10);
}

After this point I would expect any call to LocationServices.FusedLocationApi.setMockLocation(apiClient, location) to set a mock location that my listener would receive. Unfortunately, this is not the case, and the listener stays silent.

My foolproof (or so I thought) method to set mock location looks like this:

private void setMockLocation(final Location location) throws Exception {
    assertTrue(locationProvider.googleApiClient.isConnected());
    assertTrue(locationProvider.isReceivingLocationUpdates);

    final CountDownLatch countDownLatch = new CountDownLatch(1);
    LocationServices.FusedLocationApi.setMockMode(locationProvider.googleApiClient, true)
            .setResultCallback(new ResultCallback<Status>() {
                @Override
                public void onResult(Status status) {
                    assertTrue(status.isSuccess());
                    LocationServices.FusedLocationApi.setMockLocation(locationProvider.googleApiClient, location)
                            .setResultCallback(new ResultCallback<Status>() {
                                @Override
                                public void onResult(Status status) {
                                    assertTrue(status.isSuccess());
                                    countDownLatch.countDown();
                                }
                            });
                }
            });
    assertTrue(countDownLatch.await(500, TimeUnit.MILLISECONDS));
}

The method returns successfully, but no location is received by the listener. I'm really at a loss here. The worst part is that sometimes the test passes, but extremely randomly (to the point where the same code executed several times would pass and fail in subsequent runs). My debug manifest, for completeness, has the following permissions:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />

I have allowed mock locations in settings.

DOS batch password protection?

I've ended up stuck on a batch file for password protection of the MS-DOS command prompt. Neither way works properly: if it doesn't display a syntax error, it won't proceed even when the correct password is typed in. Here is the code:

@echo off
echo Computer password protected.
echo.

:PSW
REM set /p expects the prompt right after the equals sign;
REM spaces around "=" make it set the wrong variable name.
set /p PASS="Enter password: "
IF "%PASS%"=="kerberos" (
    echo Welcome.
    echo.
    goto DONE
) ELSE (
    echo Wrong password! Try again.
    echo.
    goto PSW
)

:DONE

I left the password in there as-is, since this is just a test of whether it works properly and how it actually works...

How can I test a view with a form in Django?

I am creating an application in Django and I have the following problem: I want to write a test for a view, but that view receives a form from the HTML, and I don't know how I could simulate submitting the form in order to test that view.

How could I do it?

Xcode UI Testing - Asserting actual label values when using accessibility labels

The question is actually really simple:

Is there a way to assert the displayed value from a specific label (e.g. UILabel) when using an accessibility label on this object?

As far as I can see, all the assertions (e.g. XCTAssertEqual) made in the examples, be it from a WWDC talk or blog posts, only check whether an element exists for a query, like XCTAssertEqual(app.staticTexts["myValue"].exists, true), or whether the number of cells in a table is correct, like XCTAssertEqual(app.tables.cells.count, 5). So, when avoiding accessibility labels, it's possible to check that some object has a certain value displayed, but not which object/element. And using accessibility labels robs me of the opportunity to query against the displayed values, because app.staticTexts["myValue"] will now fail to deliver a result while app.staticTexts["myAccessibilityLabel"] will hit.

Assuming I want to test my "Add new Cell to table" functionality, I can test that there is really a new cell added to the list, but I have no idea if the new cell is added on the top or the bottom of the list or somewhere in between.

Think what you will, but for me an easy way to check if a specific element has a certain value should be a no-brainer when it comes to UI Testing.

It is still possible that due to the missing documentation I might overlook the obvious. If so, just tell me.

com.sun.enterprise.config.ConfigException: Error refreshing ConfigContext

My controller test cases pass, but the console still shows the following exception. As far as I've researched on Google, this exception might be caused by tags not being closed in the dispatcher servlet XML file. But all the tags in my dispatcher servlet are closed correctly. Any idea? Thanks in advance.

com.sun.enterprise.config.ConfigException: Error refreshing ConfigContext Caused by: java.io.FileNotFoundException: C:\workspace\projectname\config\system-server-config.xml (The system cannot find the file specified)

Jasmine: testing method called from window scope function

I'm using Jasmine to test some of my code. It looks a bit like this:

// main logic
function Analytics() {
  this.construct = function() {
  }

  this.foo = function() {
  }

  this.bar = function() {
  }
}

// "main" routine, called by jQuery on ready, or directly by Jasmine
function analytics() {
  new Analytics().construct();
}

// calls main routine
$(document).ready(function () {
  analytics();
});

When running this in the browser, it works fine. However, when I want to test my code with Jasmine (testing whether the constructor gets called when calling analytics()), it fails:

Expected spy construct to have been called. (1)

This is what the spec looks like:

it('should call the constructor when the document is ready', function() {
    var _analytics = new Analytics();
    spyOn(_analytics, 'construct');
    analytics();  // note this is the "main" routine
    expect(_analytics.construct).toHaveBeenCalled();
});

My testcase seems to be incorrect but I don't really see how. Does anyone have an explanation for this behavior?
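For what it's worth, the spy never fires because analytics() constructs its own Analytics instance internally, so the instance the spec spied on is never used. A minimal Node sketch (outside Jasmine, with hand-rolled counters standing in for spies) demonstrates the difference:

```javascript
// main logic, as in the question
function Analytics() {
  this.construct = function () {};
}

function analytics() {
  new Analytics().construct(); // creates a *fresh* instance every call
}

// Spying on one instance does not intercept other instances:
var _analytics = new Analytics();
var instanceSpyCalls = 0;
_analytics.construct = function () { instanceSpyCalls += 1; };
analytics();
console.log(instanceSpyCalls); // 0 -- the spied instance was never used

// Replacing the constructor itself does intercept the internal instance:
var RealAnalytics = Analytics;
var constructCalls = 0;
Analytics = function () {
  var instance = new RealAnalytics();
  instance.construct = function () { constructCalls += 1; };
  return instance;
};
analytics();
console.log(constructCalls); // 1
```

In Jasmine terms, that's why people either expose the instance (or a factory) to the test, or spy on Analytics.prototype — the latter only works when methods live on the prototype rather than being assigned in the constructor, as they are here.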

Why do both unit and functional tests

I am doing extensive functional testing for my RESTful application, and because the app follows REST principles, the overhead is minimal. I cannot think of a concrete reason why I should also invest time and effort into writing unit tests. Am I missing an obvious point?

Symfony2 adding tests for saving api feed to database

I'm working on my first major symfony2 project.

I have updated an API that's no longer being maintained by the original author: http://ift.tt/1KWMhhU

The updated API contains all the necessary Unit and Integration Tests for the different API Calls. Including Mocks of the data feeds that come from the API.

I've now written a Symfony2 bundle, FP_DataBundle, that uses this API via console commands and saves the data from the feeds to the database.

My question is about testing: can I use the same mocks that are in my FantasyDataAPI library to test that the correct data is being saved to the database?

I'm thinking that I need the tests to execute the console commands, then fetch the data from the database and go through the mocks, checking that the data in the DB matches.

Can I create a database version that just holds the mock data and then test against that DB? How can I do that?

Or is my thinking askew and do I need to do it another way? The feeds contain a lot of fields in JSON format, and duplicating all of these in my bundle again seems like overkill.

Load Testing in Visual Studio 2013

I have one web service and a unit test project. The service is deployed at a client site, and I have the code of the unit test project. Now I have to perform web performance and load tests on the web service using Visual Studio 2013 Ultimate. How can I proceed?

Writing Bash script to test C Program

I am quite new to bash scripting and I am thinking of using it to run a few test cases for my C program. I have my expected results in a file called output_gen.txt

I run my program:

./a.out > output_res_gen.txt

diff output_gen.txt output_res_gen.txt

If the program has run correct I get the following difference:

10001c10001
< Time: 0.291555
---
> Time: 0.111091

(That is time taken to execute the code, which can vary).

I wrote my bash script as follows:

#!/bin/bash
cd ..
./a.out > ../tests/output_res_gen.txt
diff ../tests/output_gen.txt ../tests/output_res_gen.txt

However, my code does not execute (./a.out does not run). Also, is there a way to diff the two files and verify that the only difference between them is the execution time?
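One way to make the comparison ignore the timing line is to strip it from both files before diffing and treat an empty diff as a pass. A small sketch (the sample file contents here are made-up stand-ins for the real outputs):

```shell
#!/bin/sh
# Sample files standing in for the real outputs (contents are made up).
printf 'result: 42\nTime: 0.291555\n' > output_gen.txt
printf 'result: 42\nTime: 0.111091\n' > output_res_gen.txt

# Strip the "Time:" line from both files, then diff what remains.
grep -v '^Time:' output_gen.txt > expected.tmp
grep -v '^Time:' output_res_gen.txt > actual.tmp

if diff expected.tmp actual.tmp; then
    echo "PASS: only the Time line differs"
else
    echo "FAIL: real differences found"
fi
```

As for ./a.out not executing: after `cd ..` the script's working directory changes, so both `./a.out` and the `../tests/` paths are resolved relative to the new directory; echoing `pwd` before the run usually reveals the problem.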

dimanche 30 août 2015

RSpec less strict equality expectation for test

I have the following test:

it "can add an item" do
    item = Item.new("car", 10000.00)
    expect(@manager.add_item("car", 10000.00)).to eq(item)
end

Item's initialize looks like:

  def initialize(type, price)
    @type = type
    @price = price
    @is_sold = false
    @@items << self
  end

Manager's add item looks like:

  def add_item(type, price)
    Item.new(type, price)
  end

This test is currently failing because the two items have different object IDs, although their attributes are identical. Item's initialize method takes a type and a price, and I only want to check equality on those attributes... Is there a way to test strictly for attribute equality?

I have tried should be, should eq, to be, and eql? with no luck.
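A common fix is to define value equality on Item, since RSpec's eq matcher delegates to ==, which defaults to object identity. A standalone sketch (the @@items registry from the question is left out here to keep it self-contained):

```ruby
class Item
  attr_reader :type, :price

  def initialize(type, price)
    @type = type
    @price = price
    @is_sold = false
  end

  # Attribute-based equality: two Items are equal if type and price match.
  def ==(other)
    other.is_a?(Item) && type == other.type && price == other.price
  end
  alias eql? ==

  # Objects that are eql? must share a hash (for Hash/Set membership).
  def hash
    [type, price].hash
  end
end

puts Item.new("car", 10_000.00) == Item.new("car", 10_000.00) # true
```

With that in place, `expect(@manager.add_item("car", 10000.00)).to eq(item)` passes; alternatively, RSpec 3's `have_attributes(type: "car", price: 10000.00)` matcher checks attributes without touching the class at all.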

Testing Data Layer - Repository Pattern

I am working on a Web API project in ASP.NET. I have decided to put the data access layer aside and focus on the business layer, but for that I have to find a way to mimic the data access layer.

I am working with repositories. My application is a bit more complicated than this, but for now let's say that I have two domain objects, User and Ticket:

class User {
    public int Id { get; set; }
    public string Name { get; set; }

    public IList<Ticket> Tickets { get; set; }
}

class Ticket {
    public int Id { get; set; }
    public int Type { get; set; }
    public string Name { get; set; }
    public User Owner { get; set; }
}

I have the user test repository:

class UserTestRepository : IUserRepository {
    private TestDataContext context;

    public UserTestRepository (TestDataContext Context) {
        this.context = Context;
    }

    public User GetById (int Id) {
        return context.Users.Where(u => u.Id == Id).Single();
    }

    public void Add (User User) {
        context.AddUser(User);
    }
}

And also the ticket test repository which looks the same. The TestDataContext looks something like that:

class TestDataContext {
    public IList<User> Users { get; set; }
    public IList<Ticket> Tickets { get; set; }

    private int userCount;
    private int ticketCount;

    public TestDataContext () {
        userCount = 0;
        ticketCount = 0;

        Users = new List<User>();
        Tickets = new List<Ticket>();

        Seed(); // Not implemented yet...
    }

    public void AddUser (User User) {
        User.Id = userCount;
        userCount++;

        Users.Add(User);
    }

    public void UpdateUser (User User) {
        int index = Users.IndexOf(Users.Where(u => u.Id == User.Id).First());
        Users[index] = User;
    }

    public void DeleteUser (User User) {
        Users.Remove(Users.Where(u => u.Id == User.Id).First());
    }

    // Same three methods for Tickets...
}

Now let's say that I want to add a new Ticket to the user with ID 3. I would do this:

Ticket ticket = new Ticket () {
    Type = 1,
    Name = "My Ticket",
    Owner = new User () { Id = 3 }
};

TicketsRepository.Add(ticket);

The problem is that when I fetch the user with ID 3, I would want to see the ticket that I inserted in the user's Tickets list.

I thought of doing something like this:

// This is in TestDataContext

public void AddTicket (Ticket Ticket) {
    Ticket.Id = ticketCount;
    ticketCount++;

    Tickets.Add(Ticket);

    int userIndex = Users.IndexOf(Users.Where(u => u.Id == Ticket.Owner.Id).First());
    Users[userIndex].Tickets.Add(Ticket);
}

But I will have to do the same for updating and deleting, and for more complicated data relations it would be a really difficult task.

So my question is: am I doing this right? Is this how I'm supposed to test my data layer?

I don't want to test only data fetching; I want to test inserting and deleting too, because those are also very important functions of my application.

Thank you

How do I test this API endpoint with mongoose and node

I am writing a server API and I'm curious how to test functions like this that rely on database responses.

router.post('/auth', koaBody, function*(next) {
  var person;
  if (!this.request.body.username || !this.request.body.password) {
    this["throw"]('Missing Username or Password in request', 401);
  } else {
    person = (yield People.findOne({
      username: this.request.body.username
    }).exec());
    if (!person) {
      this["throw"]('Incorrect Username/Password', 401);
    } else {
      if (((yield bcrypt.compare(this.request.body.password, person.password))) === true) {
        this.status = 200;
        this.body = jwtHelper.sign(_.omit(person, '_id', 'region', 'password', 'cell', 'lastLogin'), config.sessionSecret, {
          expiresInMinutes: 365 * 24 * 60
        });
      } else {
        this["throw"]('Incorrect Username/Password', 401);
      }
    }
  }
  (yield next);
});

How to test express config?

I've been trying to figure out how to test the configuration of an express server (e.g. middleware). I haven't been able to find any examples, and I'm unsure whether the best way to test it is to simply match the expected list of middleware against the actual list, to do something else entirely, or whether it's just configuration that shouldn't be tested at all.

I should also add that I'm not so much interested in how exactly to do it as in the higher-level concept. But I'm using mocha and supertest, if that helps.

How to tell which device I'm on in Xcode UI Testing?

While an Xcode UI Test is running, I want to know which device/environment is being used (e.g. iPad Air 2, iOS 9.0, Simulator).

How can I get this information?

Could someone explain the last case of the following code? What does MoveCursor do?

The program is for editing the system date and time by the user. The input set is (Alt-F4, Time, Date, Tab). Please explain how the Tab case works.
The code is as follows:

Input = GetInput()
While (Input != Alt-F4) do
    Case (Input = Time)
        If ValidHour(Time.Hour) and ValidMin(Time.Minute) and
           ValidSec(Time.Second) and ValidAP(Time.AmPm)
        Then
            UpdateSystemTime(Time)
        Else
            DisplayError("Invalid Time.")
        Endif
    Case (Input = Date)
        If ValidDay(Date.Day) and ValidMnth(Date.Month) and
           ValidYear(Date.Year)
        Then
            UpdateSystemDate(Date)
        Else
            DisplayError("Invalid Date.")
        Endif
    Case (Input = Tab)
        If TabLocation = 1
        Then
            MoveCursor(2)
            TabLocation = 2
        Else
            MoveCursor(1)
            TabLocation = 1
        Endif
    Endcase
    Input = GetInput()
Enddo

rails host in rspec feature test

I have a Rails app with a feature spec that verifies the page URL after user login: expect(page.current_url).to eq(posts_url).

I configure the Capybara host in spec/spec_helper.rb like this:

config.before(:each) do
  Capybara.app_host = "http://mydomain.dev"
end

The test fails with:

expected: "http://ift.tt/UqzmSA"
      got: "http://ift.tt/1F9dDPH"

I configure the Rails host with Rails.application.routes.default_url_options[:host] = 'mydomain.dev', but it seems it doesn't work properly (my tests still fail).

samedi 29 août 2015

Salesforce Report SOQL access in test context

The count is zero, although there are many reports in the org. When I run the same SOQL in anonymous Apex, I get the actual count. Since Report is metadata, I would expect it to be available in test context. Can someone shed some light on this?

Here is my sample code:

@isTest 
private static void aTestMethod(){
    Integer count = [Select count(id) from Report];
}

As a workaround, I had to set SeeAllData=true to make it work.

JUnit tests for whole algorithm get into an endless loop when testing a MySQL database

I am testing my project with JUnit. When I test the following by itself, everything is successfully visited (smoke test: does everything run?).

The database (MySQL) is also cleared and recreated as intended.

TestMethod:

@Test
public void testGetInstance() throws SQLException, InterruptedException {
    DatabaseConnection db = DatabaseConnection.getInstance();
    assertNotNull(db);

    //drops db so method is forced to recreate it.
    DatabaseDataSetter.dropDatabase();

    //test if relaunch on database is successful after recreation.
    db.connect();
    assertNotNull(db);
}

Method with endless loop:

/**
 * Clears complete database.
 * @throws SQLException 
 */
public static boolean dropDatabase() throws SQLException {

    PreparedStatement ps = null;

    try {
        ps = conn.prepareStatement("DROP DATABASE planning_data");
        ps.execute();
        ps.close();
        return true;
    } catch (SQLException e) {
        e.printStackTrace();
        return false;
    }   
}

So why does it work when tested alone, but not when run as part of the whole project? I also want to test other methods that contact MySQL, but I don't think that is relevant when it already stops at the first occurrence.

Open source ETL Testing Automation tools

Is anyone using any open source tools to do ETL testing/validations? We are mostly doing the ETL validations manually and need some tool to automate them. We use Informatica to build the mappings and Tidal to schedule them. It's a large data warehouse with star and snowflake schemas.

How to execute automated tests on Asterisk?

I am a software testing engineer. My job is to test the functionality of a VoIP server based on Asterisk.

Since my job is completely manual testing, I find it boring and worry that I could be replaced by someone without a bachelor's degree.

I want to know whether I can do automated testing on Asterisk. How can I do it?

Thanks a lot!

Recommend a database for test results and metrics

I'm looking for a 'database' solution (something I can host myself, ideally Dockerised, or potentially a hosted/SaaS solution) to which I can publish 'generic' test results and from which, ideally, I can get metrics, charts, and reports too.

In terms of schema, the info I'd be publishing would be: test name, user, Git branch, time stamp, result string or generic stack trace, and some way of versioning test cases/scripts, whether by number or test script checksum. A grouping mechanism would also help.

The test scripts aren't tied to any particular language or well-known test framework.

Does something like this exist, or will I have to roll my own?

expected=JSONException.class doesn't catch

I made a test where I send invalid JSON, and it should catch a JSONException.

When I run the test, it fails and shows me a JSONException. Why doesn't the test catch it?

@Test (expected = JSONException.class)
public void testExtraFieldsJsonException() {
    String json = "wrong_data";
    fragment.decodeJson(json);
}

I use org.json.JSONException.

Test result:

org.json.JSONException: Value wrong_data of type java.lang.String cannot be converted to JSONObject
    at org.json.JSON.typeMismatch(JSON.java:111)
    at org.json.JSONObject.<init>(JSONObject.java:158)
    at org.json.JSONObject.<init>(JSONObject.java:171)
    at co.some.mainactivities.ProfileFragment.extraFields(ProfileFragment.java:154)
    at co.some.mainactivities.ProfileFragment.callExtraFields(ProfileFragment.java:243)
    at co.some.mainactivities.TestProfileFragment.testExtraFieldsJsonException(TestProfileFragment.java:41)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:250)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:177)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)

Prevent Arquillian from downloading apache-tomee before each test

I'm new to Arquillian and I'll have to fix some tests in old projects.

I noticed that, for some reason, each time I run a test, Arquillian downloads apache-tomee-1.7.2-webprofile (I switched to the newest version, 1.7.2).

Is there a way to prevent this behavior? (Maybe I have to add some Maven dependency or a startup script?) It takes forever to run all the tests.

Here is my Arquillian config:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<arquillian xmlns="http://ift.tt/Rj5rcy"
        xmlns:xsi="http://ift.tt/ra1lAU"
        xsi:schemaLocation="
            http://ift.tt/Rj5rcy
            http://ift.tt/U5TLp1">

<container qualifier="tomee" default="true">
    <configuration>
        <property name="httpPort">-1</property>
        <property name="stopPort">-1</property>
        <property name="ajpPort">-1</property>
        <property name="simpleLog">true</property>
        <property name="cleanOnStartUp">true</property>
        <property name="dir">target/apache-tomee-remote</property>
        <property name="appWorkingDir">target/arquillian-test-working-dir</property>
    </configuration>
</container>
</arquillian>

And also the Maven dependencies for the tests (all versions are the latest and taken from the parent POM):

    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.arquillian.junit</groupId>
        <artifactId>arquillian-junit-container</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.openejb</groupId>
        <artifactId>arquillian-tomee-remote</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.openejb</groupId>
        <artifactId>ziplock</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.shrinkwrap.resolver</groupId>
        <artifactId>shrinkwrap-resolver-impl-maven</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.shrinkwrap</groupId>
        <artifactId>shrinkwrap-api</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.shrinkwrap.resolver</groupId>
        <artifactId>shrinkwrap-resolver-api-maven</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.arquillian.container</groupId>
        <artifactId>arquillian-container-test-api</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.jboss.arquillian.junit</groupId>
        <artifactId>arquillian-junit-core</artifactId>
        <scope>test</scope>
    </dependency>

JavaFX: how to test simple JavaFX apps with JUnit and NetBeans

I am looking for a simple possibility to test little JavaFX apps with JUnit and NetBeans. I tried without success to test with JUnit, but it is said that it is not possible to handle the JavaFX application thread...

Is there another easy possibility to test JavaFX tools with another simple tool, just without programming?

I hope I'm not asking silly questions...

Here is a demo FX code that I want to test:

public class MyClass extends Application {

    @Override
    public void start(Stage primaryStage) {
        Button btn = new Button();
        btn.setText("Say 'Hello World'");
        btn.setOnAction(new EventHandler<ActionEvent>() {

            @Override
            public void handle(ActionEvent event) {
                System.out.println("Hello World!");
            }
        });

        StackPane root = new StackPane();
        root.getChildren().add(btn);
        Scene scene = new Scene(root, 300, 250);
        primaryStage.setTitle("Hello World!");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

Java. Gathering data for tests from production

Preparing test data is not an easy task, so I want to aggregate test data from a running production server to do component-level regression testing.

As I am in the Java world, I can use the Java Instrumentation API with Java agents, or higher-level AspectJ tools, to log component method call arguments and return values, and then write tests using the accumulated data.

But maybe there are easier ways, or ready-made solutions? Or some tools that automate writing tests based on data obtained this way?

vendredi 28 août 2015

Finding datasets for test purposes

I am looking for datasets that can serve as test samples for my work. I need a dataset of objects where each object is defined by a set of features. I dug around on Google but couldn't find useful links. Can anyone point me to a source where I can find such datasets? To better illustrate, I need datasets conceptually looking like the following, no matter the format (JSON, CSV, Excel, etc.):

Object-1 : feature1,feature2,feature3 ... ,featuren
Object-2 : feature1,feature2,feature3 ... ,featuren
Object-3 : feature1,feature2,feature3 ... ,featuren

Maven Tomcat plugin hangs

I have a webapp I'm trying to run integration tests on. It runs fine when deployed normally to a web server. However, when doing integration tests with Maven and the Tomcat plugin, it hangs at this spot indefinitely:

Aug 28, 2015 12:14:52 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19 02/11/2015 05:39 AM'
Aug 28, 2015 12:14:52 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-bio-8080"]

I kind of forgot about it, and it's been at this spot for about 2 hours now, so I'm sure it's not a patience problem.

Here is the configuration for Tomcat and Failsafe:

<build>
            <plugins>
                <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>failsafe-maven-plugin</artifactId>
                    <version>2.4.3-alpha-1</version>
                    <executions>
                        <execution>
                            <goals>
                                <goal>integration-test</goal>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <plugin>
                    <groupId>org.apache.tomcat.maven</groupId>
                    <artifactId>tomcat7-maven-plugin</artifactId>
                    <version>2.0</version>
                    <executions>
                        <execution>
                            <id>start-tomcat</id>
                            <phase>pre-integration-test</phase>
                            <goals>
                                <goal>run</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>stop-tomcat</id>
                            <phase>post-integration-test</phase>
                            <goals>
                                <goal>shutdown</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>

Any ideas why this happens?
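One thing worth checking: the tomcat7-maven-plugin's `run` goal blocks the Maven build until the server is stopped, which looks exactly like an indefinite hang when the goal is bound to pre-integration-test. Setting `fork` to true lets the build continue on to the integration tests. A sketch of the change, to be verified against the plugin version in use:

```xml
<execution>
    <id>start-tomcat</id>
    <phase>pre-integration-test</phase>
    <goals>
        <goal>run</goal>
    </goals>
    <configuration>
        <fork>true</fork>
    </configuration>
</execution>
```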

How to run two Chrome drivers with one profile using Selenium WebDriver for Node.js?

I'm writing tests, and for speed I want the user to already be authenticated (plus data already loaded in local storage).

import * as webdriver from 'selenium-webdriver';
import * as Chrome from 'selenium-webdriver/chrome';
var options = new Chrome.Options();

options.addArguments('--user-data-dir=C:\\profilepath');

var driver = new webdriver.Builder().withCapabilities(options.toCapabilities()).build();

driver.get("http://site.ru/").then(() => {
    console.log('Opened');
}, (err) => {
    console.log('Err', err);
});
var driver2 = new webdriver.Builder().withCapabilities(options.toCapabilities()).build();
driver2.get("http://site.ru/").then(() => {
    console.log('Opened');
}, (err) => {
    console.log('Error', err);
});

The first driver works fine and opens the page; the second just hangs on the initial screen without any errors. The same happens when starting them in different processes...
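Chrome refuses to attach a second browser instance to a profile directory that is already in use (the profile is locked), which matches the symptom of the second driver hanging silently. A common workaround is to clone the prepared profile into a fresh directory per driver and pass each clone via `--user-data-dir`. A stdlib-only Python sketch of the cloning step (the lock-file names are the ones Chrome uses on Linux/macOS; the Selenium wiring is omitted):

```python
import shutil
import tempfile
from pathlib import Path


def clone_profile(template_dir):
    """Copy a prepared Chrome profile to a unique directory and drop
    the singleton lock files so a new browser instance can open it."""
    target = tempfile.mkdtemp(prefix='chrome-profile-')
    shutil.copytree(template_dir, target, dirs_exist_ok=True)
    for name in ('SingletonLock', 'SingletonCookie', 'SingletonSocket'):
        lock = Path(target) / name
        if lock.is_symlink() or lock.exists():
            lock.unlink()
    return target


# Each driver then gets its own clone, e.g. in the question's code:
#   options.addArguments('--user-data-dir=' + clone_profile('C:\\profilepath'))
```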

MSTest 2015/2013 Hanging

I am trying to get some unit tests to run on a build server, and when MSTest is called from the command line it goes over the [Ignored] tests, and then on the first test it sits and hangs with no output.

I attached a debugger and decompiled MSTest, and it seems to get stuck in Microsoft.VisualStudio.TestTools.CommandLine's RunCommand : Command:

public override CommandResult Execute(TmiAdapter tmiAdapter)
{
  new ResultsEventListener(this, CommandFactory.Tmi).Initialize();
  Executor.Output.WriteInformation((string) Messages.CommandLine_StartExecution, MessageType.Status);
  CommandResult commandResult;
  try
  {
    commandResult = tmiAdapter.Run(this);
    switch (commandResult)
    {
      case CommandResult.TmiNoTestToRun:
        Executor.Output.WriteInformation((string) Messages.CommandLine_NoTestsToExecute, MessageType.Warning);
        break;
      case CommandResult.Success:
      case CommandResult.BrokenLinksFound:
        this.m_autoEvent.WaitOne();
        if (!TmiAdapter.RunPassed(this))
          commandResult = CommandResult.TmiRunFailed;
        string resultsFileForRun = TmiAdapter.GetResultsFileForRun(this.RunId);
        if (!string.IsNullOrEmpty(resultsFileForRun))
          Executor.ResultManagerInstance.AddResultsGeneralInfoPair((string) Messages.CommandLine_ResultsFileLogged, resultsFileForRun);
        Executor.ResultManagerInstance.AddResultsGeneralInfoPair((string) Messages.CommandLine_RunConfigurationUsed, tmiAdapter.RunConfig.Name);
        Executor.ResultManagerInstance.ShowResultsSummary(TmiAdapter.GetTestRunOutcome(this.RunId));
        Executor.ResultManagerInstance.ShowAllRunErrors();
        break;
    }
  }
  catch (CommandLineException ex)
  {
    this.Error = ex.Error;
    commandResult = ex.Result;
  }
  return commandResult;
}

It hangs on this line: this.m_autoEvent.WaitOne();

Automation testing on mobile browser

I want to test our web-based application on a mobile browser. I came across Appium, Selendroid, and ios-driver. I have used Selenium WebDriver before. I also came across user-agent and Ripple add-ons and other emulators. These are the questions I have:

  1. Can I use a desktop browser, set its window size to match a mobile device, and run Selenium WebDriver tests against it? Would that be sufficient?
  2. I have not used Appium before and am a little hesitant. Is it easy to use compared to ios-driver and Selendroid?
  3. Should I use iOS and Android emulators on a desktop and test away with Selenium WebDriver?

Which of the above three options is the better approach? Please share your thoughts. I am not sure if this matters, but I will be using PHP Codeception along with it.

Launching VBScript through HP UFT

I am automating a test of a VBScript through HP UFT. I want to pass several variables to the script and then capture its output.

So far I have started the script with a click from the user interface. However, this does not let me get information back into the test case.

I think I have several options: starting it with cmd, or starting it with parameters.

But I have no clue how to get the output of my script back into my test case. Can someone enlighten me?

Are there a standard set of symbols for representing different classes of functions in documentation?

When describing a function in relation to its class (and I'm thinking in JavaScript although I reckon this could apply to most languages), I'm wondering if there are a standard set of symbols to denote the relationship of function to class.

For example it would make sense to me that a static or class function would be represented as

ClassName.functionName

since that's the way you would refer to it in code.

Other types of function would be instance methods and private functions. Maybe there are more that I'm not considering (anonymous functions?).

When referring to these types of function in text (say, documentation or a test description) how do you represent the relationship between function and class? Currently I have different symbols that I use instead of the '.' but I'm wondering if there are standard symbols for this purpose.

File uploading error in Codeception?

I'm trying to upload a file to a couple of input elements. The id attributes of these inputs are generated randomly, so I retrieve a list of RemoteWebElements from my helper function:

function getInputFields() {
    $inputs = $this->getModule('WebDriver')->_findElements(['xpath' => "//input[@type='file']"]);
    return $inputs;
 }

Then, in the Cept, I try to upload the file, inserting the ID into an XPath string:

$pass = $I->getInputFields();
$path_to_input1 = "//*[@id='" . $pass[0]->getAttribute('id') ."']";
$I->attachFile($path_to_input1, '1.jpg');

I'm pretty sure the input element exists and that I get its ID correctly (checked via debug output). And I'm getting this:

[ERROR - 2015-08-28T11:15:35.801Z] RouterReqHand - _handle.error - {"stack":"\tat _uploadFile ([native code])\n\tat \n\tat _postUploadFileCommand (:/ghostdriver/request_handlers/session_request_handler.js:212:30)\n\tat _handle (:/ghostdriver/request_handlers/session_request_handler.js:198:35)\n\tat _reroute (:/ghostdriver/request_handlers/request_handler.js:61:20)\n\tat _handle (:/ghostdriver/request_handlers/router_request_handler.js:78:46)","line":431,"sourceURL":""}

[Facebook\WebDriver\Exception\WebDriverException] JSON decoding of remote response failed.
Error code: 4
The response: 'Error - incompatible type of argument(s) in call to _uploadFile(); candidates were
_uploadFile(QString,QStringList)'

Can you please help me find the pitfall here?

How to customize Testopia to add additional fields in Product dashboard

I'm trying to customize Testopia (an add-on to Bugzilla) to add a new tab in the product dashboard.

The new tab may be something like Requirements, next to the Environment tab.

Bugzilla is open source and I have all the source code available.

So please let me know how to change the source code to achieve this feature.

Thanks in advance.

Unable to create Test Plans in TFS web portal

I'm having problems creating and executing tests using TFS Test Manager and I'm hoping someone can help me sort them out.

I can create a Test Plan and associate it to an Iteration, and then add a new Test Case through the Test Plan associated to the same iteration. I've also tried creating a further Test Case through Parameters and associating it to the same iteration, but not to the Test Plan. Both Test Plan statuses have been set to Ready. When I go back to the Test Plan, I have two problems as far as I can see:

  1. The Test Cases created do not list under the Test Plan in the left-hand panel.

  2. In the right-hand panel, the menu bar with the buttons to execute tests displays for about half a second and is then replaced with an error message saying "No default test configuration is found". I suspect this is because of 1 above, but can't be sure.

I'm clearly configuring something incorrectly but cannot see what. Is anybody able to help me? I'll be greatly indebted to you if you can.

Multiple login tests on mobile app with UFT

I am trying to test the Login feature of my Android app with multiple user-password entries that I have in an Excel. I have already been able to import that data from the Excel successfully and run the same test with each row (with "Run on all Rows" option), but now I am facing a problem that I am not being able to solve.

After the test runs with one row and starts over with a new row, it does not restart the app, but starts at the point where the previous run finished. I think this is not the expected behaviour in general, since most GUI testing tools restart the app when testing a feature with parametrization (data from Excel, mostly). Anyway, I "fixed" this by logging out in my app.

In this case there was an "easy solution" by logging out. But what if I were testing a different feature where I cannot simply "log out"? The problem is that in those cases I would have to navigate back or do something that may fail and has nothing to do with the feature I am testing.

I am not sure if I am not using the right approach. Is there a good general solution for this issue?

Populating request.DATA for testing an API in Django using APIRequestFactory

I am using REST framework's test modules to write tests. I need to test a GET request, for which I created the following test:

class BlogCommentTest(APISimpleTestCase):
    def test_3002_getting_reviewcomment(self):
        factory = APIRequestFactory()
        request = factory.get(reverse('get-review-comments'), data={"article": 1})
        request.user = self.user
        view = ReviewCommentViewSet.as_view({'get': 'list'})
        force_authenticate(request, self.user)
        response = view(request)

However, written this way it populates request.GET with {'article': 1}, when I want to populate request.DATA. What is the correct way to build the request object so that request.DATA is populated the way I want?

Should functions used only by tests be part of the public API?

A prime example of this is equality operators. If only your tests need to compare your objects for equality, should the equality operator be defined alongside the tests or should it be part of the classes' public interface?

This is assuming there is a reasonable, single definition of equality for this class, and comparing instances for equality is something that might be useful outside of tests in the future.

What are the advantages and disadvantages of making it private to the tests versus making it public?

How do I create a local device farm for automation testing with Android devices (smartphones)?

I want to create a device farm locally for automation testing of my Android app on multiple real devices side by side, because services like AWS Device Farm and Xamarin are too costly and don't provide control over the testing framework servers.


Device Farm is an app testing service that enables you to test your Android, iOS, and Fire OS apps on real, physical phones and tablets. Basically, I just need to connect multiple devices to my Appium testing framework and run my test scripts on them in parallel.

Please suggest a solution. Thanks &amp; regards.

"Cannot determine expansion folder" when running android Instrumentation tests

We want to set up instrumentation tests for our app, which also has 2 flavors. We have successfully set up Android Studio to run instrumented tests directly from the IDE, but trying to run instrumented tests from the command line via 'gradle connectedCheck' always results in the following error:

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:mergeDevelopmentDebugAndroidTestJavaResources'.
> Cannot determine expansion folder for /Users/james/Development/AndroidProjects/parkinsons11/app/build/intermediates/packagedJarsJavaResources/androidTest/development/debug/junit-4.12.jar936209038/LICENSE-junit.txt with folders 

Our test app, which also has two flavours and is set up for instrumented tests, runs both from the IDE and from the command line without incident.

Here is our gradle file from our main project:

buildscript {
    repositories {
        maven { url 'http://ift.tt/1dRjIBX' }
    }

    dependencies {
        classpath 'com.crashlytics.tools.gradle:crashlytics-gradle:1.+'
    }
}
apply plugin: 'com.android.application'
apply plugin: 'crashlytics'

repositories {
    maven { url 'http://ift.tt/1dRjIBX' }
}

android {
    compileSdkVersion rootProject.ext.compileSdkVersion
    buildToolsVersion rootProject.ext.buildToolsVersion
    defaultConfig {
        applicationId "com.app.ourapp"
        minSdkVersion 16
        versionCode 11
        versionName "1.1"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    sourceSets {
        main {
            assets.srcDirs = ['src/main/assets',
                              'src/main/assets/font']
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_7
        targetCompatibility JavaVersion.VERSION_1_7
    }

    packagingOptions {
        exclude 'META-INF/LICENSE.txt'
        exclude 'META-INF/NOTICE.txt'
    }

    productFlavors {
        live {
            versionName "1.1 live"
            applicationId "com.app.ourapp.live"
        }
        development {
            versionName '1.1 development'
            applicationId "com.app.ourapp.development"
        }
    }
}

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    compile project(':library:datetimepicker')
    compile project(':library:tools')
    compile 'com.android.support:appcompat-v7:+'
    compile 'com.android.support:support-v4:+'
    compile 'com.crashlytics.android:crashlytics:1.+'
    compile 'org.quanqi:mpandroidchart:1.7.+'
    compile 'commons-io:commons-io:2.+'
    compile 'joda-time:joda-time:2.+'
    compile 'com.microsoft.azure.android:azure-storage-android:0.4.+'

    testCompile 'junit:junit:4.12'
    testCompile "org.robolectric:robolectric:${robolectricVersion}"
    testCompile "org.mockito:mockito-core:1.+"

    androidTestCompile 'junit:junit:4.12'
    androidTestCompile "org.mockito:mockito-core:1.+"
}

And here is our gradle.build from our test app (which works):

apply plugin: 'com.android.application'

android {
    compileSdkVersion 22
    buildToolsVersion "22.0.1"
    defaultConfig {
        applicationId "com.test.picroft.instrumentationtestapp"
        minSdkVersion 16
        targetSdkVersion 22
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    productFlavors {
        newFlavour {
            applicationId "com.test.picroft.instrumentationtestapp.newflavor"
        }

        oldFlavour {
            applicationId "com.test.picroft.instrumentationtestapp.oldflavor"
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:22.2.1'

    testCompile 'junit:junit:4.12'
    testCompile "org.mockito:mockito-core:1.+"

    androidTestCompile 'junit:junit:4.12'
    androidTestCompile "org.mockito:mockito-core:1.+"
}

I'm at a loss as to where I'm going wrong. I've compared the directory structure from both apps and there's no meaningful difference. Here's a rough outline of our main project's structure:

src
-androidTest
--java
---*
-live
--res
---layout
---values
-main
--java
---*
-test
--java
---*

I'm totally confused as to why instrumented tests on one app work fine both in the IDE and on the command line, while the other refuses to work from the command line.
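For what it's worth, the error message names a license file inside the junit jar, and resource-merge failures of this kind can sometimes be sidestepped by excluding the offending resource via packagingOptions, the same way the main project already excludes the META-INF licenses. This is an untested guess, not a confirmed fix for this particular plugin bug:

```groovy
packagingOptions {
    exclude 'META-INF/LICENSE.txt'
    exclude 'META-INF/NOTICE.txt'
    exclude 'LICENSE-junit.txt'
    exclude 'LICENSE.txt'
}
```

Updating the Android Gradle plugin to a newer version is also worth trying, since the working test app may simply be on a different plugin version.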

model_mommy breaks django-mptt

I'm using model_mommy to create instances of an MPTTModel in my tests, but it seems like it breaks the tree managed by mptt:

>>> parent = mommy.make(Category)
>>> child = mommy.make(Category, parent=parent)
>>> parent.get_descendants()
[]

The same without using model_mommy works properly:

>>> parent = Category(name=u'Parent')
>>> child = Category(name=u'Child', parent=parent)
>>> parent.get_descendants()
[<Category: Child>]

I suspect that the issue is that model_mommy provides random values for tree_id, lft, rght and level, which are mandatory fields, but should be handled by MPTT.

Is there a way to tell model_mommy not to fill these fields at all? Or is there a default value for these fields that would not break MPTT's save algorithm?

Appium not able to scroll on iOS 8.4

I'm trying to scroll successfully with Appium using the following code:

// java
JavascriptExecutor js = (JavascriptExecutor) driver;
HashMap<String, String> scrollObject = new HashMap<String, String>();
scrollObject.put("direction", "down");
scrollObject.put("element", ((RemoteWebElement) element).getId());
js.executeScript("mobile: scroll", scrollObject);

However, I am getting a JavaScript error when trying to scroll beyond the bottom of the UITableView, due to a known Appium issue: http://ift.tt/1fK2M82

This issue, alongside the fact that Appium's isDisplayed() method always returns true (whether or not the cell is visible on the screen) and that Appium is unable to click on a non-visible cell, means Appium cannot scroll and select objects.

Has anyone found a way around this?

Angularjs test function declared in $promise.then block

I want to test function f1. My code looks like:

$rootScope.aaa.$promise.then(
  function (aaa){ 
    $scope.f1 = function(x){ 
      return aaa + x;
    }
  }
);

my tests looks like:

it('f1 should be defined', function(){
  expect(scope.f1).toBeDefined();
  expect(scope.f1(5)).toEqual(6);
});

What should I do to make Karma wait for the scope to load?

Difference between Red Team, Penetration Testing and Blue Team

If a corporation includes as "internal entities" all of the following teams:

  1. Red Team
  2. Penetration Testing Team
  3. Blue Team

What are the differences between them? I have some difficulty understanding the difference between Red Team and Penetration Testing!

And which team would have the wider scope and the higher authority?

jeudi 27 août 2015

Simple SOAP clients like the SOA test tool for Linux?

Can you please suggest some simple SOAP clients for Linux, like the SOA test tool? SOA is not free, and I need free software. I have used SoapUI as well, but it would be great to find something like SOA.

Javascript Testing framework similar to dojo DOH

Is there any JavaScript framework similar to Dojo DOH?

I am looking for a testing framework which is browser-based like Dojo DOH, i.e. it provides support for both browsers and JS runtime environments.

In simple words, I am looking for:

A simple browser-based graphical front end and runner file.

Why am I looking for another testing framework instead of Dojo DOH?

Dojo DOH comes with the complete Dojo package, so if I develop an application in Dojo it makes sense to use DOH. But suppose I develop an application with other frameworks like Angular, Backbone, etc.

Then, just to test the application, I would have to ship the complete Dojo package with my application. (Please correct me if I am wrong; this assumption is based on my experience with Dojo.)

Please let me know if it is possible to test an application with Dojo DOH without adding the complete Dojo package.

Thanks for the help!

Simple application to view emails in the SMTP Pickup Directory

I am looking for a web application to view email files in a folder for use in a test environment. If required I will write the web application myself, but I was hoping someone has already written one.

The reason we need this is because our test environment deliberately does not send emails, but we still need to check our website is writing the correct emails to the SMTP Pickup Directory.

I have heard such web applications exist, and even spoken to people that have used them, but I cannot find one.

Requirements:

  • Must be web based.
  • Shows a list of emails based on files in a folder.
  • Allows the user to read individual emails.
  • Allows the user to delete individual emails.
  • Allows the user to delete all emails.

Can anyone point me in the right direction to find an existing application which does this?
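If it comes to writing it yourself, the folder-scanning part of such a viewer is small. A stdlib-only Python sketch of the listing and deleting pieces (assuming the pickup files have a .eml extension; the web UI on top is left out):

```python
import email
from email import policy
from pathlib import Path


def list_pickup(folder):
    """Return (filename, subject, recipient) for each .eml in the folder."""
    messages = []
    for path in sorted(Path(folder).glob('*.eml')):
        msg = email.message_from_bytes(path.read_bytes(),
                                       policy=policy.default)
        messages.append((path.name,
                         str(msg['Subject'] or ''),
                         str(msg['To'] or '')))
    return messages


def delete_message(folder, filename):
    """Delete one email file; 'delete all' is a loop over list_pickup()."""
    (Path(folder) / filename).unlink()
```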

Supertest + Tape + Restify - Can't set headers twice error on consecutive calls

I'm building an API using Node.js and Restify. I am trying to do functional endpoint testing using Supertest and Tape. I have a test that makes two consecutive calls to the API and it is saying that I can't set the headers after they are sent.

UserController.js

/*
 * Create a user.
 */
exports.store = function(req, res, next) {
    // Get request input.
    var firstName = ParsesRequest.getValue(req, 'firstName'),
        lastName = ParsesRequest.getValue(req, 'lastName'),
        email = ParsesRequest.getValue(req, 'email'),
        password = ParsesRequest.getValue(req, 'password');

    // Create command.
    var command = new CreateUserCommand(firstName, lastName, email, password);

    // Execute command.
    CommandBus.execute(command, function(error, data) {
        var hasError = !!error && !data,
            status,
            message;

        status = (!hasError) ? StatusCodes.Created : StatusCodes.InternalServerError;
        message = (!hasError) ? UserTransformer.transform(data) : error;

        res.json(status, message);
    });
}

CreatesUserApiTest.js

var test            = require('tape');
var CreatesFakeUser = require('../../helpers/CreatesFakeUser'); 
var request         = require('supertest');
var config          = require('../../../config/Config').getConfiguration('test');
var url             = 'http://' + config.url + ':' + config.port;

test('Creates two users with same email and returns error', function(assert) {
    var user = CreatesFakeUser.generate();

    request(url)
        .post('/user')
        .set('Accept', 'application/json')
        .send(user)
        .expect(201)
        .end(function(err, res){
            assert.equals(res.status, 201, 'Returns 201 as status code.');
            assert.equals(!!res.body.id, true, 'Result body has an id field.');
        });

    request(url)
        .post('/user')
        .set('Accept', 'application/json')
        .send(user)
        .expect(500)
        .end(function(err, res){
            assert.equals(res.status, 500, 'Returns 500 because user already exists.');
            assert.end();
        });
});

Does anyone have any idea why the second request is failing? I am only sending res.json once in my method, so it doesn't seem like it should set the headers twice. Also, I'm not sure if it has anything to do with setting the Connection header to close, but I tried it and it doesn't help. Maybe it has something to do with async behavior.

Any help would be appreciated!

Force maven to repackage artifact after integration test?

I want to re-package an artifact after the integration-test phase, the reason being that there are modules, like authentication and some others, that cannot work on a dev machine for a number of reasons. A solution that comes to mind is to add another custom phase similar to package and run it under a different profile. Is there a simpler, more straightforward way to do this?

Fault tolerance testing tools

I was reading about Chaos Monkey - http://ift.tt/Od7n1Y - and was curious whether there is a similar tool for testing the fault tolerance of a non-cloud-based service. How do most organizations generally test the fault tolerance and recoverability of their systems?

How can JMS topics be tested (ActiveMQ + JMeter / HermesJMS / smth else)

I am a beginner at working with JMS (including testing it), but I need to start somewhere. Below I will try to sum up what I have been able to investigate on this topic. I hope you can advise me on choosing the right tools for my test purposes and correct my findings where necessary.

What do I have? There are two components in an enterprise system which communicate with each other via JMS topics. One of them is a publisher which sends JMS messages (call it Component A) and the other is a subscriber which receives JMS messages (call it Component B). Message transmission is managed by the JMS provider Apache ActiveMQ (version 5.10.x or above).

What do I want? I want to test the process of JMS messages transmission.

How do I want to perform testing? I have the following ideas, but I am not sure all of them are possible; please tell me which of them can be implemented:

  1. Make my client a subscriber which receives JMS messages from the real remote system (Component A).
  2. Make my client a publisher which sends JMS messages (created by me) to the real remote system (Component B).
  3. Make my client a listener which intercepts the JMS messages transmitted from remote Component A to Component B, and inspects them.

Which tools am I planning to use? So far I have found that this can be done with the following tools:

  1. HermesJMS (it seems points 1 and 2 above can be covered, but I am not sure about point 3). Pro: integrates with the SoapUI tool. Con: there seems to be a compatibility problem with ActiveMQ 5.9 and above (errors while creating destinations).
  2. Apache JMeter (primarily a performance-testing tool, but I guess it can also be used in the same way as HermesJMS). Pro: acknowledged compatibility with ActiveMQ (same vendor, Apache). Con: it is a performance-testing tool.
  3. Could you please advise me on something else?

I will be glad to see any remarks you have.

Best regards, Thomas

JUnit - Should you create a DB connection in @BeforeClass or @Before?

When doing integration tests, it is often the case that you need to connect to a database and make some changes.

Should this be done in @BeforeClass or @Before in JUnit?

Advice needed on creating a Java Systems Integration Testing framework

I have a 10-year-old SIT testing framework for my application. It's beautiful, and I love using it and adding new test cases as we progress. I joined this project only recently. It is an XML-based framework where we define test cases and call a Java program. The Java program uses DOM to parse the XML tags and invokes my application code with the inputs we configure in the XML. The output, once received, is asserted, converted to XML, and written to an output file. All is good.

So I have 5, 10, 15, 20... tests per XML file, and close to 120 XML files, which cover around 85-90% of the code.

Now they want to explore alternatives, not for any specific reason but because they have been using it for 10 years. They want to know if anything out there in the market will improve things. I have to recommend alternatives if any are better than the current framework. Even if it costs money, that's not a problem.

I researched a little and see that Spock is an option.

Please advise me on this, experts.

Our integration testing exercises the system extensively by mocking input data objects, etc. And we do heavy regression testing too.

We prefer to use Java.

C#: test a method with an object parameter implementing a private interface

I have a first project with a method that returns a Model instance, implemented by a private class PrivateModel that inherits Model and a private interface IFoo.

Sample:

Project1:

public class Model {}
private interface IFoo {}
private class PrivateModel : Model, IFoo {}

// a sample class with the returning method
public class Bar
{
    public static Model CreateModelInstance()
    { return new PrivateModel(); }

    // code...
}

Project2:

// get model instance
var model = Bar.CreateModelInstance(); // return a Model

The second project calls a method "Act" with the model parameter, but Act's implementation tests if the model is a PrivateModel (with the IFoo implementation).

Project1:

public class Bar
{
    // code...

    public static bool Act(Model model)
    {
        // sample logic
        return model is IFoo;
    }
}

Now the question:

Because I have to test a method that calls the Act method (it's static) and I can't mock it, I have to build an object that implements IFoo (which is private). Can I implement a class like TestClass : IFoo in the test project (a third project), or do I have to use a Model returned from Project1?

Programs available for generating Ishihara plates

I've been attempting to make my own Ishihara plate; however, when I rasterize it to grayscale, it's clear my attempt did not work.

I'm wondering if anyone has suggestions on programs available to "Ishihara-ize" an image. My goal was to make a checkerboard (of different colour combos, e.g. red and green) such that, when converted to grayscale, the image no longer shows the square boundaries of the checkers on the board.

I appreciate any help; thank you in advance!
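One detail that decides whether a plate survives a grayscale conversion: common RGB-to-gray formulas are a weighted sum (Rec. 601 luma: 0.299 R + 0.587 G + 0.114 B), so a colour boundary disappears in grayscale only if the two colours have (nearly) equal luma. A small Python sketch that adjusts a green to match a red (`equal_luma_green` is a hypothetical helper, not a feature of any tool):

```python
def luma(rgb):
    """Rec. 601 luma, the weighting used by many grayscale conversions."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b


def equal_luma_green(red, green_hint):
    """Search the green channel of green_hint for the value whose luma
    best matches red's, so a grayscale render hides the boundary."""
    target = luma(red)
    r, _, b = green_hint
    best_g = min(range(256), key=lambda g: abs(luma((r, g, b)) - target))
    return (r, best_g, b)


red = (200, 40, 40)
green = equal_luma_green(red, (40, 180, 40))
```

With colour pairs chosen this way, the checkerboard pattern should be (close to) invisible after a straight luma-based grayscale conversion; note that tools using a different grayscale formula will give slightly different results.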

Testing an Angular route filter with Jasmine. Code works but can't make test pass

So my route filter is working as expected and I was writing some tests around it. I have several passing tests but for some reason I can't get this one test to pass. My route filter looks like this:

stripUs: ->
      resolve: ->
        resolution: ($location) ->
          urlParts = $location.$$path.split("/")
          if urlParts.indexOf('us') is 1
            $location.path(urlParts.slice(2,urlParts.length).join("/"))

The idea is to redirect /us/foo/bar urls to /foo/bar.

The tests I currently have passing for this filter are:

ddescribe 'stripUs', ->
    location = rootScope = null

    beforeEach inject ($location, $rootScope, initialDataService, ignoreHttpBackend) ->
      ignoreHttpBackend()
      location = $location
      rootScope = $rootScope

    it 'removes /us from /us/programming', ->
      location.path("/us/programming")
      rootScope.$digest()
      expect(location.path()).toEqual('/programming')

    it 'removes /us from /us/programming/**', ->
      location.path("/us/programming/sports")
      rootScope.$digest()
      expect(location.path()).toEqual('/programming/sports')

    it 'preserves route params', ->
      location.path("/us/programming/sports?affiliate=foo&&process=foobarred")
      rootScope.$digest()
      expect(location.path()).toEqual('/programming/sports?affiliate=foo&&process=foobarred')

The test I can't get to pass is:

it 'preserves route params', ->
      location.path("/us/programming?affiliate=foo")
      rootScope.$digest()
      expect(location.path()).toEqual('/programming?affiliate=foo')

the error message is:

Expected '/us/programming?affiliate=foo' to equal '/programming?affiliate=foo'

which would lead me to believe the code isn't working, but it is if I actually visit the page. Additionally, when I put a console.log at the very top of the route filter, the log is never hit. I am new to testing in Jasmine and could use any help. Thanks in advance.

Example of how to write Django Test

I have to write some tests for services I built that connect our backend to a mobile app another team member is building. I was asked to write unit tests once I finished them. I am not familiar with Django testing, so could someone give me an example of how you would test one of the services? That way I can learn by example and do the rest on my own.

This is one example of a service I built: it checks whether there is a user with a given email in our database and returns a JSON object:

@csrf_exempt
def user_find(request):
    args = json.loads(request.body, object_hook=utils._datetime_decoder)
    providedEmail = args['providedEmail']
    try:
        user = User.objects.get(email=providedEmail)
        user_dict = {'exists': 'true', 'name': user.first_name, 'id': user.id}
        return HttpResponse(json.dumps(user_dict))
    except User.DoesNotExist:
        user_dict = {'exists': 'false'} 
        return HttpResponse(json.dumps(user_dict))

What would be the correct way to test something like this? I am guessing I have to mimic a request that supplies an email, and then have two tests, one where the email matches an existing user and one where it doesn't, making sure each returns the appropriate object. Is this the correct way of thinking about it? Can someone help me out a bit with the syntax?
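Yes, that two-branch structure is the usual approach. In a real Django project you would subclass django.test.TestCase, create a user in setUp, and POST JSON with self.client.post; the sketch below is a framework-free version of the same idea, with the Django pieces (User.objects.get, User.DoesNotExist, HttpResponse) replaced by injected stand-ins so it runs anywhere:

```python
import json

# Stand-in for User.DoesNotExist so the sketch is self-contained.
class DoesNotExist(Exception):
    pass

def user_find_logic(body, get_user):
    """The view's logic with the Django lookup injected for testability."""
    args = json.loads(body)
    try:
        user = get_user(args['providedEmail'])
        return json.dumps({'exists': 'true', 'name': user['first_name'], 'id': user['id']})
    except DoesNotExist:
        return json.dumps({'exists': 'false'})

# Test 1: the email matches an existing user.
found = json.loads(user_find_logic('{"providedEmail": "a@b.com"}',
                                   lambda email: {'first_name': 'Ann', 'id': 1}))
assert found == {'exists': 'true', 'name': 'Ann', 'id': 1}

# Test 2: no user with that email.
def no_user(email):
    raise DoesNotExist()

missing = json.loads(user_find_logic('{"providedEmail": "a@b.com"}', no_user))
assert missing == {'exists': 'false'}
print("both branches behave as expected")
```

In the real test the two cases would instead call self.client.post against a database seeded in setUp (the URL and payload names are whatever your project uses); the two-test structure stays the same.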

Is there a testing tool for the Linux shell, like Selenium for the web?

I'm looking for a Linux tool that makes testing as easy as Selenium does.

With the Selenium Firefox extension you can get a recording of any session: you just press the record button in the extension and perform the actions in the browser.

I tried Cucumber, but my colleagues (from support) hate having to learn a new language just to write test cases... they prefer a boring spreadsheet filled in manually...

And I have to confess something embarrassing... I thought ttyrec, "the magical tool", could record a shell session so that I could later replay the actions from the recording... but the result is just an "ascii film"... I feel a bit silly, sorry.

So I'm looking for a tool more or less like ttyrec, but acting as a "Selenium for the shell".

XPath does not find the web element which has multiple classes

I am very new at Java and Selenium so my apologies in advance if my question sounds a bit primary.

I am using Selenium and Java to write tests, but I have an issue finding elements. I know some other ways to find this WebElement,

but why does this:

WebElement we1 =driverChrome.findElement(By.xpath
("//div[contains(@class,'elfinder-cwd-filename ui-draggable') and @title='project.CPG']"));

fail to find this:

<div class="elfinder-cwd-filename ui-draggable" title="project.CPG">project.CPG</div>

and shows this error:

Exception in thread "main" org.openqa.selenium.NoSuchElementException: no 
such element: Unable to locate element:{"method":"xpath","selector":"
//div[contains(@class,'elfinder-cwd-filename ui-draggable') and @title='project.CPG']"}

Iterating through list to select element - only able to click certain ones

Okay... this has been very frustrating, because I have a method that always works for lists like this, yet when it's used for this particular list element it is only able to click some of the elements! Here's the method I use:

public void selectConvienanceComfortFeature(String feature){
        List<WebElement> choice = driver.findElements(By.xpath("//ul[@id='j_id_as-searchform-col-wrapper-col1-listingsSearch-featureOptions-checkboxes-j_id_hl-0-j_id_hm-j_id_ho-0-features']/li"));
        for(WebElement e : choice){
            System.out.println(e.getText());
            if(e.getText().contains(feature)){
                e.click();
                break;
            }
        }
    }

The System.out.println is only there to see if it is actually finding the elements, which it is: I get a printout of every element in the list. But when I call selectConvienanceComfortFeature("3rd Row Seats"), it won't click it! It works for other list options, such as "Heated Seats", that were printed to the console as well. I know it's there and I don't know why some of them work and some don't. No, they are not selected by default.

Here is the HTML segment.

(Screenshot attached in the original post.)

Maximum call stack size exceeded after using ((JavascriptExecutor) seleniumdriver).executeScript("return arguments[0].attributes);", webElement)

As you can see here I am using

((JavascriptExecutor) seleniumdriver).executeScript("return arguments[0].attributes);", webElement);

to get all attributes from webElement, but it gives me this error:

Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: Maximum call stack size exceeded
  (Session info: chrome=43.0.2357.134)
  (Driver info: chromedriver=2.17.340124 (8cdfc496335a58cfb8bdd672c7dce0c23456384b),platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 4.25 seconds
Build info: version: '2.47.1', revision: '411b314', time: '2015-07-30 02:56:46'
System info: host: 'sina-PC', ip: '10.55.0.131', os.name: 'Windows 7', os.arch: 'x86', os.version: '6.1', java.version: '1.8.0_60'
Driver info: org.openqa.selenium.chrome.ChromeDriver
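For what it's worth, this crash typically happens because attributes is a live NamedNodeMap whose DOM nodes the driver cannot serialize back to Java; the common workaround is to have the script build and return a plain object of name/value pairs instead. A sketch of that script, written as a plain function with a fake element so it can be exercised without a browser:

```javascript
// Build a plain {name: value} object from an element's attributes.
// In Selenium you would pass the body of this function as the script string
// and `arguments[0]` would be the element.
function collectAttributes(el) {
  const out = {};
  for (let i = 0; i < el.attributes.length; i++) {
    out[el.attributes[i].name] = el.attributes[i].value;
  }
  return out; // a plain object serializes cleanly back to Java as a Map
}

// Minimal stand-in for a DOM element, just for the sketch.
const fakeElement = { attributes: [
  { name: 'class', value: 'blabla' },
  { name: 'title', value: 'the title' },
] };
console.log(collectAttributes(fakeElement));
```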

Run Managed Wildfly instead of Embedded WIldfly with Arquillian

It seems embedded WildFly does not support @Asynchronous methods. I want to use a non-embedded WildFly with Arquillian. What should be changed in the following configuration?

        <!--Arquillian JUnit integration: -->
        <dependency>
            <groupId>org.jboss.arquillian.junit</groupId>
            <artifactId>arquillian-junit-container</artifactId>
            <version>1.1.8.Final</version>
            <scope>test</scope>
        </dependency>
        <!--Container adapter for Wildfly START:-->
        <dependency>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-arquillian-container-embedded</artifactId>
            <version>8.2.1.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-embedded</artifactId>
            <version>8.2.1.Final</version>
            <scope>test</scope>
        </dependency>
        <!--Container adapter for Wildfly end -->
        <!--test scopes ends-->

    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <version>1.1.0.Alpha1</version>
            </plugin>
            <!--You need the maven dependency plugin to download locally a zip with the server, unless you provide your own, it will download under the /target directory -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.8</version>
                <executions>
                    <execution>
                        <id>unpack</id>
                        <phase>process-test-classes</phase>
                        <goals>
                            <goal>unpack</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>org.wildfly</groupId>
                                    <artifactId>wildfly-dist</artifactId>
                                    <version>8.2.1.Final</version>
                                    <type>zip</type>
                                    <overWrite>false</overWrite>
                                    <outputDirectory>target</outputDirectory>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.17</version>
                <configuration>
                    <!-- Fork every test because it will launch a separate AS instance -->
                    <forkCount>1</forkCount>
                    <reuseForks>false</reuseForks>
                    <argLine>-Djboss.http.port=8181</argLine>
                    <systemPropertyVariables>
<java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                        <!-- the maven dependency plugin will have already downloaded the server on /target -->
                        <jboss.home>${project.basedir}/target/wildfly-8.2.1.Final</jboss.home>
                        <module.path>${project.basedir}/target/wildfly-8.2.1.Final/modules</module.path>
                    </systemPropertyVariables>
                    <redirectTestOutputToFile>false</redirectTestOutputToFile>
                </configuration>
            </plugin>
        </plugins>
    </build>

I tried:

<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-arquillian-container-managed</artifactId>
    <version>8.2.1.Final</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-server</artifactId>
    <version>8.2.1.Final</version>
    <scope>test</scope>
</dependency>

But without success.

RSpec & Rails: testing the create action

Hey guys, I'm trying to test this create method in a controller spec:

def create
    @user = User.new(user_params)
    if @user.save
      UserMailer.account_activation(@user).deliver_now
      flash[:info] = "Please check your email to activate your account."
      redirect_to root_url
    else
      render 'new'
    end
  end

I tried this, but the User count doesn't seem to increment; in addition, I would like to test the account activation mailer as well:

  it "creates a new user" do
    expect{
      post :create, user: FactoryGirl.attributes_for(:user)
    }.to change(User,:count).by(1)
  end

Mark test as skipped from pytest_collection_modifyitems

How can I mark a test as skipped during the pytest collection process?

What I'm trying to do is have pytest collect all tests and then, using the pytest_collection_modifyitems hook, mark certain tests as skipped according to a condition I get from a database.

I found a solution which I don't like, and I was wondering if there is a better way.

def pytest_collection_modifyitems(items, config):
    ... # get skip condition from database
    for item in items:
        if skip_condition == True:
            item._request.applymarker(pytest.mark.skipif(True, reason='Put any reason here'))

The problem with this solution is that I'm accessing a protected member (_request) of the class.

Thanks.
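There is a public API for exactly this: Item.add_marker, which avoids touching the protected _request attribute. A sketch of the hook (the database lookup is an assumed helper):

```python
import pytest

def should_skip(item):
    """Hypothetical helper that queries your database for the skip condition."""
    return False

def pytest_collection_modifyitems(config, items):
    skip_marker = pytest.mark.skip(reason="disabled by database flag")
    for item in items:
        if should_skip(item):
            item.add_marker(skip_marker)  # public API, no protected members
```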

Check multiple files exist in directory

How can I find whether multiple files are present in a directory in ksh (on AIX)?

This is what I am trying:

if [ $# -lt 1 ];then
    echo "Please enter the path"
    exit
fi
path=$1
if [ [ ! f $path/cc*.csv ] && [ ! f $path/cc*.rpt ] && [ ! f $path/*.xls ] ];then
    echo "All required files are not present\n"
fi

I am getting an error like check[6]: !: unknown test operator (check is my file name).

What is wrong in my script? Could someone help me with this?
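For reference, ksh's [[ ... ]] needs -f (not a bare f) and does not nest [ ... ] inside it; also, -f with a glob breaks when the glob expands to several names. One way to sketch the check is a small helper that succeeds if at least one match is a regular file:

```shell
# exists: succeed if at least one argument (after glob expansion) is a regular file
exists() {
    for f in "$@"; do
        [ -f "$f" ] && return 0
    done
    return 1
}

path=${1:-.}
if ! exists "$path"/cc*.csv || ! exists "$path"/cc*.rpt || ! exists "$path"/*.xls; then
    echo "All required files are not present"
fi
```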

How can I import a file with common keywords in robot framework?

In a robot framework, I have a test suite like this:

test-suite/
  ├── Common.robot
  ├── TestCaseA.robot
  └── TestCaseB.robot

The file Common.robot defines some keywords which will be used by both TestCaseA.robot and TestCaseB.robot. In other languages Common.robot would be called a library, but trying to import it like this

*** Settings ***
Library         Commons

or like that

*** Settings ***
Library         Commons.robot

results in an error.

[ ERROR ] Error in file '[...]/TestCaseA.robot': Importing test library 'Commons' failed: ImportError: No module named Commons

The Library setting seems to work only for low-level test libraries. I am sure there has to be another way. How can user-defined keyword files be included in Robot Framework?
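Keyword files written in Robot Framework syntax are imported with the Resource setting rather than Library (Library is for Python/Java test libraries):

```
*** Settings ***
Resource    Common.robot
```

After that, the keywords defined in Common.robot can be used directly in TestCaseA.robot and TestCaseB.robot.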

How to check test command in Unix

if [[ -s "${SCRIPT_PATH}/alert.log" ]]; then
    if test `cat alert.log | wc -l` -gt 4; then
        if test -s `cat alert.log | grep -i Time`; then
            cat alert.log | tail -r | grep -i Time > latest_time.out
            CURRENT_TIMESTAMP=`cat latest_time.out | head -1 | grep "Time:*" | cut -f2 -d":"`
            echo $CURRENT_TIMESTAMP
            rm alert.log
        fi
    fi
else
    CURRENT_TIMESTAMP=$( date +%Y-%m-%d )
    echo $CURRENT_TIMESTAMP
fi

If I execute this script and alert.log does not contain the word "time", it should print the current timestamp, but it's not working. When I checked separately, I found that the `cat alert.log | wc -l` -gt 4 part is not working. Can anyone help?
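For what it's worth, `if` branches on an exit status, so grep can drive the condition directly; wrapping grep's output in test -s is where this usually goes wrong (and tail -r is not portable). A sketch of the intended logic, with a generated sample log so it is self-contained (the format of the "Time:" line is an assumption):

```shell
# Create a sample alert.log so the sketch is self-contained.
printf 'line1\nline2\nline3\nline4\nTime: OK\n' > alert.log

if [ "$(wc -l < alert.log)" -gt 4 ] && grep -qi "time" alert.log; then
    # grep's exit status drives the branch; no test -s wrapper is needed
    CURRENT_TIMESTAMP=$(grep -i "time" alert.log | tail -1 | cut -f2 -d":")
else
    CURRENT_TIMESTAMP=$(date +%Y-%m-%d)
fi
echo "$CURRENT_TIMESTAMP"
rm -f alert.log
```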

Testing for Spring MVC framework

I am attempting testing for a sample Spring project: http://ift.tt/1jtHq9p. When I try invoking the controller classes for testing purposes I get the error below:

Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: 
Error creating bean with name 'albumController' defined in file [D:\Docs_Tutorials\spring-music\spring-music\target\classes\org\cloudfoundry\samples\music\web\controllers\AlbumController.class]: 
Unsatisfied dependency expressed through constructor argument with index 0 of type [org.cloudfoundry.samples.music.repositories.AlbumRepository]: 
: No qualifying bean of type [org.cloudfoundry.samples.music.repositories.AlbumRepository] found for dependency: 
expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}; 
nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: 

Below is my JUnit test class :

@Category(IntegrationTest.class)
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = {WebMvcConfig.class, SpringApplicationContextInitializer.class})
public class EndPointsTesting {

@Autowired
private WebApplicationContext wac;

@Autowired
private AlbumRepository albumRepository;

private MockMvc mockMvc;

@Before
public void setup() {
    this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();             
}


@Test
public void testEndPoints() throws Exception
{
        this.mockMvc.perform(get("/albums"))
        .andExpect(status().isOk())
        .andExpect( content().contentType(org.springframework.http.MediaType.APPLICATION_JSON))
        .andExpect(content().string("[{\"_class\": \"org.cloudfoundry.samples.music.domain.Album\",\"artist\": \"Test123\",\"title\": \"Test Title\",\"releaseYear\": \"2015\",\"genre\": \"Rock\"    },{\"_class\": \"org.cloudfoundry.samples.music.domain.Album\",\"artist\": \"Test456\",\"title\": \"Test Title\",\"releaseYear\": \"2015\",\"genre\": \"Blues\"   }]"));
}

}

Is there something wrong in the way I am attempting the integration test? Can someone help?

mercredi 26 août 2015

How to test a print method in Java using Junit

I have written a method that prints output to the console. How should I test it?

public void print(List<Item> items) {
    for (Item item : items) {
        System.out.println("Name: " + item.getName());
        System.out.println("Number: " + item.getNumber());
    }
}

Currently, my test looks like this:

@Test
public void printTest() throws Exception {
    // am figuring out what to put here
}

I have read the solution at this post (thanks @Codebender and @KDM for highlighting it), but I don't quite understand it. How does the solution there test the print(List<Item> items) method? Hence, I'm asking afresh here.
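The technique in the linked answer is to swap System.out for a stream you control before the call and restore it afterwards, then assert on the captured text. A self-contained sketch of that idea (the print method here is a simplified stand-in for yours; in JUnit the redirect/restore pair would typically live in @Before/@After methods):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class PrintCaptureSketch {

    // Stand-in for the real print method; the capture technique is what matters.
    static void print(String name, int number) {
        System.out.println("Name: " + name);
        System.out.println("Number: " + number);
    }

    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer));   // redirect System.out into the buffer
        try {
            print("Widget", 42);
        } finally {
            System.setOut(original);              // always restore the real stream
        }
        String output = buffer.toString();
        if (!output.contains("Name: Widget") || !output.contains("Number: 42")) {
            throw new AssertionError("unexpected output: " + output);
        }
        System.out.println("captured OK");
    }
}
```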

Espresso test unable to import AndroidJUnit4 and Espresso

I have followed the tutorial given in

http://ift.tt/1MP7vCJ

my code is

/* JUnit4 & Espresso */
androidTestCompile 'com.android.support.test:runner:0.3'

// Set this dependency to use JUnit 4 rules
androidTestCompile 'com.android.support.test:rules:0.3'

// Set this dependency to build and run Espresso tests
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2'

androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v18:2.1.1'

But I still can't import the AndroidJUnit4 class or Espresso.

import static android.support.test.espresso.Espresso.onView;

But after typing android.support.test., the espresso library is shown in red, and AndroidJUnit4 is also unable to run.

Does anyone know what the issue is? Thanks a lot.

Revert a Docker container back to its original image without restarting it?

Normally, people are all about making Docker persist data in their containers and there are about twenty million questions on how to do exactly that, but I'm a tester and I want to dump all that crap I just did to my data and revert back to my known state (aka my image).

I'm aware I can do this by spinning up a new container based on my image but this forces me to disconnect and reconnect any network connections to my container and that's a huge pain.

Is it possible to revert a running container back to its original image without restarting it?

How to directly find WebElements by attributes other than "class" and "name" (for example "title")

I am very new at Java and Selenium so my apologies in advance if my question sounds a bit primary.

I use:

driverChrome.findElements(By.className("blabla"));

to find elements which have "blabla" as their class name, for example:

<span class="blabla" title="the title">...</span>

Now, what if I want to find all elements by their other attributes? something like:

driverChrome.findElements(By.titleValue("the title"));

This is the code that I am currently using to do this task:

List<WebElement> spans = driverChrome.findElements(By.tagName("span"));
for (WebElement we : spans) {
    if (we.getAttribute("title") != null) {
        if (we.getAttribute("title").equals("the title")) {
            ...
            break;
        }
    }
}

but this is neither fast nor easy to use.
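Selenium has no By.titleValue, but both XPath and CSS selectors can match any attribute directly: By.xpath("//span[@title='the title']") or By.cssSelector("span[title='the title']") replace the whole loop. The XPath predicate can be sanity-checked offline with Python's stdlib XPath support (the markup below is illustrative):

```python
import xml.etree.ElementTree as ET

# A tiny document to exercise the selector on.
html = ("<div>"
        "<span class='blabla' title='the title'>hit</span>"
        "<span class='blabla' title='other'>miss</span>"
        "</div>")
root = ET.fromstring(html)

# Same predicate Selenium's By.xpath would use: match on any attribute.
matches = root.findall(".//span[@title='the title']")
print([e.text for e in matches])  # -> ['hit']
```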

How should I test this module?

I'm writing a node module that collects data from a web page using cheerio. The module calls back with the collected data as an object, like this:

module.exports = function collect(callback) {
  var data = {
    item1: '',
    item2: '',
    item3: ''
  };

  callback(null, data);
}

I'm trying to decide how to test this. I'm inclined to create an expected object and compare it to what I actually received. I plan on using mocha and assert, so this test would look something like this:

describe('collect()', function() {

  it('should give me the expected data', function(done) {
    var expected = {
      item1: '',
      item2: '',
      item3: ''
    };

    collect(function(err, actual) {
      assert.deepEqual(actual, expected);
      done();
    });
  });

});

My main gripe with this solution is that I cannot test each piece of data individually. For example, item1 and item2 might equal what was expected while item3 does not, failing the whole test. Is there a better solution here?

DOM Testing Jade

I have a project that uses Node, Express, and Jade, and I want to test it. I have been able to test the server using Mocha, but now I need to test the client side code. I have a number of scripts that reference the DOM that I want to test, and it might also be worthwhile to test things in the DOM, such as click events. I haven't been able to find any documentation on how to client-side test a Node project or Jade files. What resources could I use to test these things?

How to get HTML code of a WebElement in Selenium

I am new at testing so my apologies in advance if my question sounds a bit primary.

I am using Selenium and Java to write a test.

I know that webElement.getAttribute("innerHTML"); brings me the innerHTML, for example for the element below:

<a href="#" class="ui-dialog-titlebar-close ui-corner-all" role="button" style="position: absolute; border-radius: 0px 0px 4px 4px;"><span class="ui-icon ui-icon-closethick">close</span></a>

it returns:

<span class="ui-icon ui-icon-closethick">close</span>

but I need something that brings me the attributes of the WebElement "a" itself, something like below:

href="#" class="ui-dialog-titlebar-close ui-corner-all" role="button" style="position: absolute; border-radius: 0px 0px 4px 4px;"
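That string is what browsers expose as outerHTML, so webElement.getAttribute("outerHTML") should return the element including its own opening tag and attributes; innerHTML covers only the children. The distinction, illustrated with stdlib Python (the markup is a trimmed version of the example above):

```python
import xml.etree.ElementTree as ET

a = ET.fromstring('<a href="#" class="ui-dialog-titlebar-close" role="button">'
                  '<span class="ui-icon">close</span></a>')

outer = ET.tostring(a, encoding="unicode")                      # like outerHTML
inner = "".join(ET.tostring(c, encoding="unicode") for c in a)  # like innerHTML

print(inner)                                             # <span class="ui-icon">close</span>
print('href="#"' in outer and 'role="button"' in outer)  # True
```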

Multiple APKs on single device

My QA team is asking me to give them different APKs pointing to different servers, so that they can install all of them on the same device and compare them side by side.

I know it's impossible to have multiple APKs on a device without changing the package name. But the app uses services like GCM, which depend on the package name, so we then have to start making changes on the server to support debug builds.

I'm just generally curious how people usually test their apps, especially in a case like the above where there are multiple servers you may want to test against. Are there any specialised tools you use? What's the best practice?
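The usual approach is Gradle product flavors: each flavor gets an applicationIdSuffix so the APKs install side by side, plus a buildConfigField for its server URL, so no code changes are needed per build. A sketch of the build.gradle fragment (flavor names and URLs are illustrative; GCM still needs a registration per resulting application id):

```
android {
    productFlavors {
        production {
            buildConfigField "String", "SERVER_URL", '"https://api.example.com"'
        }
        staging {
            applicationIdSuffix ".staging"
            buildConfigField "String", "SERVER_URL", '"https://staging.example.com"'
        }
    }
}
```

The app then reads BuildConfig.SERVER_URL at runtime instead of a hard-coded endpoint.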

Can Cucumber hooks differentiate tags

Ruby 2. I have an API realised as a module. It defines a class as simply as

class Foo
  include API
end

but it's also used to extend another module via

module Bar
  class << self
    include API
  end
end

I want to test the behaviour both of an instance of Foo (e.g., Foo.new.methname) but also the module functions (e.g., Bar.methname).

One of my Before hooks defines @exemplar as the default object against which tests should be applied. So, the question: how can my Before hook tell whether it should use @exemplar = Bar or @exemplar = Foo.new?

Alternatively, after playing with tags a bit, let me try a different approach. If a scenario is tagged with @a and @b, and I have Around('@a') and Around('@b') hooks, and cucumber is invoked with -t @a, both hooks get invoked. Is there a way the hook code can tell

  1. What the Around('...') argument is (i.e., the value of the '...'), and
  2. What tags are actually in the set being applied?

I.e., is there any way the Around('@b') hook can tell that it's for the @b tag expression, and that @b is not in the list of tags being applied?

Thanks!

How to extend Entity from AbstractAuditingEntity on Jhipster generated app?

I've generated an entity with the command yo jhipster:entity MyEntity

and the following options

{
    "relationships": [],
    "fields": [
        {
            "fieldId": 1,
            "fieldName": "title",
            "fieldType": "String"
        }
    ],
    "changelogDate": "20150826154353",
    "dto": "no",
    "pagination": "no"
}

I've added the auditable columns on liquibase changelog file

<changeSet id="20150826154353" author="jhipster">
    <createSequence sequenceName="SEQ_MYENTITY" startValue="1000" incrementBy="1"/>
    <createTable tableName="MYENTITY">
        <column name="id" type="bigint" autoIncrement="${autoIncrement}" defaultValueComputed="SEQ_MYENTITY.NEXTVAL">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="title" type="varchar(255)"/>

        <!--auditable columns-->
        <column name="created_by" type="varchar(50)">
            <constraints nullable="false"/>
        </column>
        <column name="created_date" type="timestamp" defaultValueDate="${now}">
            <constraints nullable="false"/>
        </column>
        <column name="last_modified_by" type="varchar(50)"/>
        <column name="last_modified_date" type="timestamp"/>
    </createTable>

</changeSet>

and modify the MyEntity class to extend AbstractAuditingEntity

@Entity
@Table(name = "MYENTITY")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class MyEntity extends AbstractAuditingEntity implements Serializable {

then run mvn test and got the folowing exception

[DEBUG] com.example.web.rest.MyEntityResource - REST request to update MyEntity : MyEntity{id=2, title='UPDATED_TEXT'}

javax.validation.ConstraintViolationException: Validation failed for classes [com.example.domain.MyEntity] during update time for groups [javax.validation.groups.Default, ]
List of constraint violations:[
    ConstraintViolationImpl{interpolatedMessage='may not be null', propertyPath=createdBy, rootBeanClass=class com.example.domain.MyEntity, messageTemplate='{javax.validation.constraints.NotNull.message}'}
]
    at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.validate(BeanValidationEventListener.java:160)
    at org.hibernate.cfg.beanvalidation.BeanValidationEventListener.onPreUpdate(BeanValidationEventListener.java:103)
    at org.hibernate.action.internal.EntityUpdateAction.preUpdate(EntityUpdateAction.java:257)
    at org.hibernate.action.internal.EntityUpdateAction.execute(EntityUpdateAction.java:134)
    at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:463)
    at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:349)
    at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:350)
    at org.hibernate.event.internal.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:67)
    at org.hibernate.internal.SessionImpl.autoFlushIfRequired(SessionImpl.java:1191)
    at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1257)
    at org.hibernate.internal.QueryImpl.list(QueryImpl.java:103)
    at org.hibernate.jpa.internal.QueryImpl.list(QueryImpl.java:573)
    at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:449)
    at org.hibernate.jpa.criteria.compile.CriteriaQueryTypeQueryAdapter.getResultList(CriteriaQueryTypeQueryAdapter.java:67)
    at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:318)

this is the test that's failing

@Test
    @Transactional
    public void updateMyEntity() throws Exception {
        // Initialize the database
        myEntityRepository.saveAndFlush(myEntity);

        int databaseSizeBeforeUpdate = myEntityRepository.findAll().size();

        // Update the myEntity
        myEntity.setTitle(UPDATED_TITLE);


        restMyEntityMockMvc.perform(put("/api/myEntitys")
                .contentType(TestUtil.APPLICATION_JSON_UTF8)
                .content(TestUtil.convertObjectToJsonBytes(myEntity)))
                .andExpect(status().isOk());

        // Validate the MyEntity in the database
        List<MyEntity> myEntitys = myEntityRepository.findAll();
        assertThat(myEntitys).hasSize(databaseSizeBeforeUpdate);
        MyEntity testMyEntity = myEntitys.get(myEntitys.size() - 1);
        assertThat(testMyEntity.getTitle()).isEqualTo(UPDATED_TITLE);
    }

the line that's throwing the exception is this

List<MyEntity> myEntitys = myEntityRepository.findAll();

I've noticed that the TestUtil.convertObjectToJsonBytes(myEntity) method returns the JSON representation without the auditable properties, which is expected because of the @JsonIgnore annotations, but I suppose the mockMvc.perform update operation isn't honoring the updatable = false attribute set on the createdBy field:

@CreatedBy
@NotNull
@Column(name = "created_by", nullable = false, length = 50, updatable = false)
@JsonIgnore
private String createdBy;

How can I make an entity auditable and have the tests pass?

How to generate multiple-criterias to find a WebElement

I am new at testing so my apologies in advance if my question sounds a bit primary.

I use Selenium and Java to write a test, and I need to find an element by more than one criterion.

I saw this question on Stack Overflow, which is exactly what I mean, but the answers do not work for me, as I do not know what //input[(@id='id_Start') and (@class='blabla')] is or how to generate it.
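That expression is just XPath with two predicates joined by and: it matches <input> elements whose id is id_Start and whose class is blabla, and in Selenium you pass it straight to By.xpath("//input[@id='id_Start' and @class='blabla']"). The same selection can be checked offline with stdlib Python, which chains predicates instead of using and (the markup below is illustrative):

```python
import xml.etree.ElementTree as ET

html = ("<form>"
        "<input id='id_Start' class='blabla' value='both'/>"
        "<input id='id_Start' class='other' value='id only'/>"
        "</form>")
root = ET.fromstring(html)

# ElementTree chains predicates; full XPath writes [@id='id_Start' and @class='blabla']
matches = root.findall(".//input[@id='id_Start'][@class='blabla']")
print([e.get("value") for e in matches])  # -> ['both']
```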

JavaScript testing of a form using AJAX/jQuery calls with Jasmine

Any idea how I can write Jasmine test code for the following snippet?

 ()->
  $("form.new_client").on "ajax:success", (event, data, status, xhr) ->
    $("form.new_client")[0].reset()
    $('#client_modal').modal('hide')
    $('#error_explanation').hide()

  $("form.new_client").on "ajax:error", (event, xhr, status, error) ->
    errors = jQuery.parseJSON(xhr.responseText)
    errorcount = errors.length
    $('#error_explanation').empty()
    if errorcount > 1
      $('#error_explanation').append('<div class="alert alert-error">The form has ' + errorcount + ' errors.</div>')
    else
      $('#error_explanation').append('<div class="alert alert-error">The form has 1 error.</div>')
    $('#error_explanation').append('<ul>')
    for e in errors
      $('#error_explanation').append('<li>' + e + '</li>')
    $('#error_explanation').append('</ul>')
    $('#error_explanation').show()

I could really use some help. thanks

Distributed unit testing framework for .NET

I need to perform distributed unit testing. A test consists of some actions on one cluster node, then some actions on another cluster node, then checking the result. I don't want to hand-write a client-server architecture just for these tests. Is there something ready-made?

Testing angular controller with many dependencies

I've started testing my Angular app and have a question that bothers me a lot. For example, I have a controller (mainController) which injects 2 services: authService and configService.

Before testing I should prepare something like that:

describe('controller: testController with testService mock', function() {      
    var controller, authService, configService;

    beforeEach(module('app'));

    beforeEach(inject(function($controller, _authService_, _configService_) {         
        authService = _authService_;
        configService = _configService_;

        controller = $controller('mainController');
    }));    

    it('should be registered with all dependencies', function() {
        expect(controller).to.be.defined;

        expect(authService).to.be.defined;
        expect(configService).to.be.defined;
    });

});

And that's totally clear to me. But what if one or both of the services have their own dependencies (services)? Of course I'm going to add them by passing them through the inject function. In small apps that's no big problem; I add as many services as I need. But what if those services inject other services, which inject others, and there is a huge hierarchy? What if we must add 30 services and can't make a mistake, because otherwise it's not going to work?

To be honest, I've searched a lot, but while there are many testing examples and tutorials, every single one is based on a totally basic app with a few controllers and services.

Is there a painless way to handle this? Maybe there is a way to skip some dependencies, or to force services to be injected automatically along with their dependencies?