Friday, 31 July 2015

Testing a loop that repeats itself every 10 seconds

I have code that repeats every 10 seconds, but I can't test it for long because my PowerShell session keeps hanging and the code simply stops for no apparent reason (the process is still running, but it no longer gives out results). Is there a way to test the code, or to run it safely without it being interrupted? I tried to search, but it seems that a library like unittest would just crash along with my code in the Windows shell if I ran it for, say, a day; it usually hangs only a few hours after I start testing manually.

The code is something like this:

import time
import requests
while True:
    getting = requests.get(some_url)
    result = getting.json()
    posting = requests.post(another_url, headers=headers, json=result)
    time.sleep(10)

Thank you for your help.
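One likely culprit worth noting: requests calls made without a timeout can block forever, which matches a loop that keeps running but stops producing output. A minimal hardening sketch (assuming some_url, another_url and headers are defined elsewhere) that adds timeouts and logging so a stalled request raises instead of hanging silently:

import logging
import time
import requests

logging.basicConfig(filename="poller.log", level=logging.INFO)

while True:
    try:
        getting = requests.get(some_url, timeout=30)   # raise instead of hanging forever
        result = getting.json()
        posting = requests.post(another_url, headers=headers, json=result, timeout=30)
        logging.info("posted, status %s", posting.status_code)
    except requests.RequestException as exc:
        logging.warning("request failed: %s", exc)
    time.sleep(10)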

PHPUnit with DbUnit: how can I keep data in my DB across tests?

I have a PHPUnit question regarding DbUnit and how to keep data created in the database by one test for use in the next. I'm new to PHPUnit (we've been using an in-house tester for years but are finally trying to get with the modern age), so I apologize if this is a trivial issue.

The desired effect: I have a MySQL table that contains a column with a unique key. If an attempt is made to insert a duplicate value into this column, special things happen that I would like to be able to test. I have written a test that inserts a value into this column (and checks its success), followed immediately by another test that checks how the class fails when attempting to insert a duplicate value. I'd like to be able to catch that exception and test it. I am using DbUnit to pre-fill my DB with everything I need.

The problem: at the start of each test, getDataSet() appears to be called and, as a result, the unique key data I inserted in the first test is no longer there to test against. Consequently, I can't test the anticipated failure of inserting a duplicate unique key.

What I'm looking for: obviously, some way to persist the database data across tests; perhaps a way to avoid calling getDataSet() at the beginning of the second test.

I certainly hope this is possible; I can't imagine why it wouldn't be, since people surely want to test duplicate inserts! I am willing to entertain other solutions if they accomplish the task.

Thanks in advance!

Here's my test, stripped down to the relevant bits:

<?php
class UserPOSTTest extends \PHPUnit_Extensions_Database_TestCase
{

    static private $pdo = null;
    private $conn = null;

    /**
     * @return PHPUnit_Extensions_Database_DB_IDatabaseConnection
     */
    public function getConnection()
    {
        if($this->conn === null) {
            if (self::$pdo == null) {
                self::$pdo = new \PDO('mysql:host=localhost;dbname=thedatabase', 'user', '*********');
            }
            $this->conn = $this->createDefaultDBConnection(self::$pdo, "db");
        }
        return $this->conn;
    }

    /**
     * @return PHPUnit_Extensions_Database_DataSet_IDataSet
     */
    public function getDataSet()
    {
        // this is returned at the beginning of every test
        return $this->createFlatXmlDataSet(dirname(__FILE__) . '/some_data_set.xml');
    }

    /**
     * test the insertion of the value "unique key value" into a column set as UNIQUE KEY in mysql
     * since getDataSet() has cleared this table, it passes.
     */
    public function uniqueKeyTest_passes() 
    {
        $inserter = new Inserter("unique key value");

        $this->assertEquals($inserter->one,1); // just some bogus assertion 

    } // uniqueKeyTest_passes

    /**
     * run the exact same insert as in uniqueKeyTest_passes() above. the purpose of this test is to
     * confirm how the Inserter class fails on the attempt to insert duplicate data into a UNIQUE KEY column.
     * however, the data inserted in uniqueKeyTest_passes() has been scrubbed out by getDataSet()
     * this is the crux of my question
     */
    public function uniqueKeyTest_should_fail() 
    {
        try {
            // exact same insert as above, should fail as duplicate
            $inserter = new Inserter("unique key value");
        }
        catch(Exception $e) {
            // if an exception is thrown, that's a pass
            return;
        }

        // the insert succeeds when it should not
        $this->fail("there should be an exception for attempting insert of unique key value here");

    } // uniqueKeyTest_should_fail 

}
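For reference, DbUnit re-applies the dataset before every test because the default setup operation is CLEAN_INSERT. A sketch of one workaround: override getSetUpOperation() so the first test's rows survive into the second (doing both inserts inside a single test is the other common approach):

    /**
     * Overrides DbUnit's default CLEAN_INSERT setup operation.
     */
    public function getSetUpOperation()
    {
        // NONE() skips the truncate-and-reseed step, leaving prior rows in place;
        // INSERT() would re-apply the dataset without truncating first.
        return \PHPUnit_Extensions_Database_Operation_Factory::NONE();
    }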

Django: tests.py as a module

Background:

I'm using Django 1.8.

I'm beginning to write tests for it.

For models.py or views.py, I usually remove the file and create a package folder with the same name to replace it.

In this way, I can split the models and views across different code files, which makes them easier to edit.


Question:

But when I tried to change tests.py into a package folder, I found that the test cases in its __init__.py cannot run.

What's wrong? And if I want to do this, is there a way?

Please help, thank you.
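Worth knowing here: since 1.6, Django's default runner uses unittest discovery with the pattern test*.py, and __init__.py does not match that pattern, so cases defined there are silently skipped. A layout sketch that discovery does pick up (myapp is a placeholder):

myapp/
    tests/
        __init__.py      # may stay empty
        test_models.py   # matches test*.py, so discovery collects it
        test_views.py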

RSpec - ActiveRecord::RecordNotFound when testing templates

I'm trying to test that my templates are rendering but can't seem to figure out why it's not registering the ID of my stubbed instance. What do I have to do to get a correct path?

describe RestaurantsController do
  let(:restaurant) { FactoryGirl.build_stubbed(:restaurant) }

  describe "GET #show" do
    before { get :show, id: restaurant.id }
    it { should render_template('show') }
  end
end

Error:

1) RestaurantsController GET #show 
     Failure/Error: before { get :show, id: restaurant.id }
     ActiveRecord::RecordNotFound:
       Couldn't find Restaurant
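A probable cause, sketched below: build_stubbed assigns an id but never writes to the database, so a controller doing Restaurant.find raises RecordNotFound. Either persist the record or stub the lookup:

let(:restaurant) { FactoryGirl.create(:restaurant) }

# ...or keep the stubbed instance and intercept the query:
before do
  allow(Restaurant).to receive(:find).and_return(restaurant)
  get :show, id: restaurant.id
end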

JS newbie confused about common test procedure

All:

I have never done JS testing before; all I did was write code, run it, and if there was a bug, figure it out; if not, consider it done.

For small projects this is OK, because I can figure out what's wrong quickly, but now that I've joined a team with a medium-sized project, I realize I should learn how to do JS testing.

So my question is:

[1] Is there a common guideline about testing procedure (no need to cover everything, just an experienced engineer's daily routine)? Like what to test and how (some examples with explanation would be appreciated).

[2] I find most posts talk about using Jasmine/Mocha/Grunt/Karma for testing, but without understanding the content and plan, I don't quite understand why I should use them and how. Could anyone give an example of their usage scenarios?

BTW, I know this is a very newbie question, and probably someone will label it as too broad; if so, could you please just address whatever specific point you can, with a small example? I will collect the answers and make a summary myself (the most important thing I need to know is which "specific" tests I need to run, like a test flow; how to run them is second priority).

Thanks
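As a concrete starting point, a minimal Mocha spec sketch (sum is a hypothetical module under test); each it block pins down one observable behaviour:

var assert = require('assert');   // Node's built-in assertion module
var sum = require('./sum');       // hypothetical module under test

describe('sum', function () {
  it('adds two numbers', function () {
    assert.strictEqual(sum(2, 3), 5);
  });

  it('throws on a missing argument', function () {
    assert.throws(function () { sum(2); });
  });
});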

Symfony2 PHPUnit: how to drop and copy a database before running tests?

In my symfony2 application, I am using phpunit to verify that the response from every route has a code 200.

Before I run the tests, I want to drop the test database and copy my production database under the name "test" (i.e. I want to reinitialize my test database).

I learned about public static function setUpBeforeClass(), but I am lost as to how to drop and copy the database.

How can I do that?

My class so far:

<?php

namespace AppBundle\Tests\Controller;

use AppBundle\FoodMeUpParameters;
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

require_once __DIR__.'/../../../../app/AppKernel.php';


class ApplicationAvailabilityFunctionalTest extends WebTestCase
{

    public static function setUpBeforeClass()
    {

    }

    /**
     * @dataProvider routeProvider
     * @param $route
     */
    public function testAllRoutesAreLoaded($route)
    {
        $listedRoutes = $this->getListedRoutes();

        $this->assertArrayHasKey($route, $listedRoutes);
    }
}
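One possible sketch, assuming Doctrine and the standard console commands are available: shell out to the console from setUpBeforeClass() to rebuild the test database (restoring the production data would be an extra mysqldump/restore step):

    public static function setUpBeforeClass()
    {
        $console = __DIR__.'/../../../../app/console';
        passthru(sprintf('php %s doctrine:database:drop --force --env=test', $console));
        passthru(sprintf('php %s doctrine:database:create --env=test', $console));
        passthru(sprintf('php %s doctrine:schema:create --env=test', $console));
    }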

RSpec - Error when testing nested index route

I have my photo actions nested under restaurant actions like so:

resources :restaurants do
    resources :photos
end

However when I run my test:

describe PhotosController do
  it { should route(:get, "/restaurants/:restaurant_id/photos").to(action: :index) }
end

I get the error:

1) PhotosController should route GET /restaurants/:restaurant_id/photos to/from {:action=>"index", :controller=>"photos"}
     Failure/Error: it { should route(:get, "/restaurants/:restaurant_id/photos").to(action: :index) }
       The recognized options <{"controller"=>"photos", "action"=>"index", "restaurant_id"=>":restaurant_id"}> did not match <{"action"=>"index", "controller"=>"photos"}>, difference:.
       --- expected
       +++ actual
       @@ -1 +1 @@
       -{"action"=>"index", "controller"=>"photos"}
       +{"controller"=>"photos", "action"=>"index", "restaurant_id"=>":restaurant_id"}
     # ./spec/controllers/photos_controller_spec.rb:4:in `block (2 levels) in <top (required)>'

I checked my route paths via rake routes and got the following:

restaurant_photos  GET /restaurants/:restaurant_id/photos(.:format)  photos#index

What am I doing wrong?
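A note on the failure: the route matcher compares against everything route recognition returns, and ":restaurant_id" in the tested path is treated as a literal segment value. A sketch that supplies a concrete id and expects it back:

describe PhotosController do
  it { should route(:get, "/restaurants/1/photos").to(action: :index, restaurant_id: "1") }
end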

Unit testing Collection in Java

I have a unit test that will allow me to iterate through a Collection object containing a list of vehicles. Upon each iteration, I want to check whether the vehicle is an instance of Automobile. So my code looks a bit like this:

public class VehicleChecker {
    protected boolean checkVehicles(Garage garage) {
        for (Vehicle vehicle : garage.getVehicles()) {
            if (vehicle instanceof Automobile) return true;
        }
        return false;   // no automobile found
    }
}

So I wrote my code accordingly:

@Mock private Garage mockGarage;
@Mock private VehicleCollection mockVehicleCollection;
@Mock private VehicleCollectionIterator mockVehicleCollectionIterator;
@Mock private Vehicle mockVehicle;

@Test
public void testCheckVehicles() {

    VehicleChecker testObject = new VehicleChecker();

    when(mockGarage.getVehicles()).thenReturn(mockVehicleCollection);
    when(mockVehicleCollection.iterator()).thenReturn(mockVehicleCollectionIterator);
    when(mockVehicleCollectionIterator.hasNext()).thenReturn(true).thenReturn(false);
    when(mockVehicleCollectionIterator.next()).thenReturn(mockVehicle);

    boolean result = testObject.checkVehicles(mockGarage);

    verify(mockGarage).getVehicles();
    assertEquals(true, result);
}

The problem occurs with the verify statement. Based on how it was written, the test should pass. When I step through the code, however, it just skips the for loop entirely. Why is that? Is there a difference in the way one iterates through a Collection as opposed to an ArrayList? If so, how do I properly mock that interaction?
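A common way out, sketched under the assumption that getVehicles() can return a plain java.util.Collection<Vehicle>: hand the code a real list instead of stubbing the iterator by hand, and mock the Automobile subtype, since a mock of Vehicle is never an instanceof Automobile:

import java.util.Collections;

@Mock private Automobile mockAutomobile;   // Vehicle subtype, so instanceof passes

when(mockGarage.getVehicles())
        .thenReturn(Collections.<Vehicle>singletonList(mockAutomobile));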

Espresso startActivity that depends on Intent

I have the following situation.

My activity has a fragment that depends on a Serializable object; my onCreate:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    MyObject myObj = (MyObject) getIntent().getSerializableExtra("myobj");

    if(myObj != null) {
        FragmentManager manager = getSupportFragmentManager();
        FragmentTransaction transaction = manager.beginTransaction();
        transaction.add(R.id.container, MyFragment.newInstance(myObj));
        transaction.commit();
    }
}

But in my Espresso test I simply can't pass the intent to the activity before it's created; I tried setActivityIntent in several ways but can't figure out how to make it work.

Here is my last attempt:

import android.content.Intent;
import android.support.test.InstrumentationRegistry;
import android.support.test.espresso.Espresso;
import android.test.ActivityInstrumentationTestCase2;
import org.junit.Before;

import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

public class MyActivityTest extends

     ActivityInstrumentationTestCase2<MyActivity> {

        private MyActivity activity;
        private MyObject myObj;

        public MyActivityTest() {
            super(MyActivity.class);
        }

        @Before
        protected void setUp() throws Exception {
            super.setUp();
            injectInstrumentation(InstrumentationRegistry.getInstrumentation());
            myObj = MyObject.mockObject();
            Intent i = new Intent();
            i.putExtra("myobj", myObj);
            setActivityIntent(i);

        }

        public void testName(){
            Espresso.onView(withId(R.id.name)).check(matches(withText(myObj.getObjName())));
        }

    }

I searched a lot but nothing works; MyObject is always null in the test. I think this should be simple, so what am I missing?
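One detail worth checking, sketched below: setActivityIntent() only affects the launch performed by the first getActivity() call, and the code above never calls getActivity(). Setting the intent first and launching afterwards, in a plain JUnit3-style setUp(), follows the documented order:

@Override
protected void setUp() throws Exception {
    super.setUp();
    injectInstrumentation(InstrumentationRegistry.getInstrumentation());
    myObj = MyObject.mockObject();
    Intent i = new Intent();
    i.putExtra("myobj", myObj);
    setActivityIntent(i);          // must precede the first getActivity() call
    activity = getActivity();      // the activity launches here, intent attached
}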

Cucumber JVM second scenario or feature

I use Cucumber JVM for mobile test automation, and I have one question.

When I use one feature file, or one scenario in a feature file, my code works; but with two feature files, or two scenarios in one feature file, it throws a java.lang.NullPointerException.

The exception:

    java.lang.NullPointerException
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:210)
    at org.openqa.selenium.support.ui.FluentWait.<init>(FluentWait.java:94)
    at org.openqa.selenium.support.ui.WebDriverWait.<init>(WebDriverWait.java:70)
    at org.openqa.selenium.support.ui.WebDriverWait.<init>(WebDriverWait.java:44)
    at com.cucumber.OtoTest.Steps.yeni_adres_sayfasını_ac(Steps.java:269)
    at ✽.When yeni adres sayfasını ac(1features.feature:22)

java.lang.NullPointerException
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:210)
    at org.openqa.selenium.support.ui.FluentWait.<init>(FluentWait.java:94)
    at org.openqa.selenium.support.ui.WebDriverWait.<init>(WebDriverWait.java:70)
    at org.openqa.selenium.support.ui.WebDriverWait.<init>(WebDriverWait.java:44)
    at com.cucumber.OtoTest.Steps.yeni_adres_sayfasını_ac(Steps.java:269)
    at ✽.When yeni adres sayfasını ac(2dos.feature:6)


Feature one:

@üyelik
Feature: New User

Scenario: I am a new user

Given I open the app
When I select the city
When I uye ol button
When I field the E-posta
When I field the Sifre
When I field the Sifre repeat
When I field the Ad
When I field the Soyad
When I field the D.Tarihi
When I field the Semt
When I check the E-posta informationcheckbox
When I check the Sms information checkbox
Then I click to save button
And I click tamam button
Given Adres ekleme butonuna tıkla


Feature two:

@adres
Feature: adres ekleme

Scenario: login oldum adres ekliyorum

When yeni adres sayfasını ac
When yeni adres sayfasında geri butonuna bas
When I field the E-posta
When Yeni adres ekleme butonuna tekrar bas
When yeni adres sayfasını acx2
Then Adres formunu doldurs
And Kaydet butonuna bas

Please help me
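The stack trace shows a null WebDriver reaching WebDriverWait, which typically means the driver was created once (for the first scenario) and never re-created. A sketch of per-scenario lifecycle hooks; createDriver() is a hypothetical factory standing in for whatever driver the suite already uses:

import cucumber.api.java.After;
import cucumber.api.java.Before;
import org.openqa.selenium.WebDriver;

public class Hooks {
    public static WebDriver driver;

    @Before
    public void startDriver() {
        if (driver == null) {
            driver = createDriver();   // hypothetical factory for the suite's driver
        }
    }

    @After
    public void stopDriver() {
        driver.quit();
        driver = null;
    }
}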

Mocha async test handle errors

I'm trying to create a test case with Mocha but my code is asynchronous.

That's fine: I can add a "done" callback function to "it", and that works perfectly for positive cases. But when trying to test negative cases, it will just make the test fail.

I would like to make something like this but asynchronous:

someObject.someMethod(null).should.equal(false)

Instead, I can only test that the callback returns, instead of testing what really happened (null is not valid):

it('this should return false or an error', function(done) {
    someObject.someMethod(null, '', done);
});

I would like to write something like this:

it('this should return false or an error', function(done) {
    someObject.someMethod(null, '', done).should.throw();
});

but that would lead to this error:

"TypeError: Cannot read property 'should' of undefined"

I also tried using expect and assert, but the same rules apply.

Any clues? Thanks
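A sketch assuming someMethod follows Node's error-first callback convention: assert on the err argument inside the callback rather than on the (undefined) return value, and signal done accordingly:

it('this should return false or an error', function (done) {
  someObject.someMethod(null, '', function (err, result) {
    // the negative case passes exactly when an error is reported
    if (!err) return done(new Error('expected an error for null input'));
    done();
  });
});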

Maven JUnit test with Spring: Failed to load ApplicationContext

Thanks in advance; this has bothered me for two days already and I still don't know the solution!

Running the JUnit test in Eclipse by right-clicking the test class and choosing Run As > JUnit Test succeeds completely. But when I run mvn test on the command line, the console log shows the message: Failed to load ApplicationContext.

Below is the full message:

2015-07-31 17:44:58,103 ERROR [org.springframework.test.context.TestContextManager] - Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@1d24fcc4] to prepare test instance [com.ieasy360.sop.hq.api.dao.BaseDepartmentDAOTest@7d3a3e7] 
java.lang.IllegalStateException: Failed to load ApplicationContext
    at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:157)
    at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109)
    at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75)
    at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:321)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:211)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:288)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:290)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:119)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
    at $Proxy0.invoke(Unknown Source)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'agentProfileController': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.ieasy360.sop.hq.api.service.AgentProfileService com.ieasy360.sop.hq.api.controller.AgentProfileController.agentProfileService; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'agentProfileServiceImpl' defined in file [F:\ci\ieasy360-sop\sop-hq-api\target\classes\com\ieasy360\sop\hq\api\service\impl\AgentProfileServiceImpl.class]: Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: Could not initialize class $Proxy31
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:287)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1106)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
    at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:103)
    at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:1)
    at org.springframework.test.context.support.DelegatingSmartContextLoader.loadContext(DelegatingSmartContextLoader.java:228)
    at org.springframework.test.context.TestContext.loadApplicationContext(TestContext.java:124)
    at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:148)
    ... 32 more
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.ieasy360.sop.hq.api.service.AgentProfileService com.ieasy360.sop.hq.api.controller.AgentProfileController.agentProfileService; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'agentProfileServiceImpl' defined in file [F:\ci\ieasy360-sop\sop-hq-api\target\classes\com\ieasy360\sop\hq\api\service\impl\AgentProfileServiceImpl.class]: Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: Could not initialize class $Proxy31
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:506)
    at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:284)
    ... 47 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'agentProfileServiceImpl' defined in file [F:\ci\ieasy360-sop\sop-hq-api\target\classes\com\ieasy360\sop\hq\api\service\impl\AgentProfileServiceImpl.class]: Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: Could not initialize class $Proxy31
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:848)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:790)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:707)
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:478)
    ... 49 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class $Proxy31
    at sun.reflect.GeneratedConstructorAccessor59.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
    at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:608)
    at org.springframework.aop.framework.JdkDynamicAopProxy.getProxy(JdkDynamicAopProxy.java:117)
    at org.springframework.aop.framework.ProxyFactory.getProxy(ProxyFactory.java:112)
    at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.createProxy(AbstractAutoProxyCreator.java:476)
    at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:362)
    at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:322)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1461)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
    ... 58 more

fatal error: NSArray element failed to match the Swift Array Element type, but only the third time

I know there are a lot of questions about this problem, but none of them solved mine.

This is my function:

func sortedWorkingHours() -> [DBWorkingHours] {

    let result = Array(workingHours).sort {
        return ($0 as DBWorkingHours).createdAt.compare(($1 as DBWorkingHours).createdAt) == NSComparisonResult.OrderedAscending
    }
    print("--->DONE")
    return result
}

This is how I call this function in 3 places for 3 tests:

for workingHoursItem in settings.sortedWorkingHours() { // the error occurs here on the third run
    ...
}

This is what I get in the console:

Test Case '-[DirectBistroTests.CoreDataLocationTests testCurrentWorkingHoursTitleForLocationWithWorkingTypeOpenForSelectedAndTimeIsIncluded]' started.
...
--->DONE
...
Test Case '-[DirectBistroTests.CoreDataLocationTests testCurrentWorkingHoursTitleForLocationWithWorkingTypeOpenForSelectedAndTimeIsNotIncluded]' started.
...
--->DONE
...

Test Case '-[DirectBistroTests.CoreDataLocationTests testRequestingHoursTitleForTakeawayMode]' started.
--->DONE
fatal error: NSArray element failed to match the Swift Array Element type

Why?

This is the failing test:

func testRequestingHoursTitleForTakeawayMode() {

    let location = DBLocation.findOrUpdateLocationWithDictionary(mockDictionaryForLocation(), inContext: context)
    DBWorkingHours.findOrUpdateWorkingHoursWithDictionary(mockDictionaryForCurrentWorkingHours(), inContext: context)
    let requestingHoursTitle = location.requestingHoursTitleForMode(.Takeaway)

    let expectedTitle = "sdf"
    XCTAssertEqual(requestingHoursTitle.string, expectedTitle)
}
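A sketch that may sidestep the bridging crash, assuming workingHours is an NSSet relationship: cast the whole set to a typed Swift array once, instead of bridging each element inside the comparator:

func sortedWorkingHours() -> [DBWorkingHours] {
    // one up-front cast to a typed array (assumes workingHours is an NSSet)
    let hours = workingHours.allObjects as! [DBWorkingHours]
    return hours.sort {
        $0.createdAt.compare($1.createdAt) == .OrderedAscending
    }
}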

Extracting a value from one HTTP request and passing it to another

My issue is that I have to extract the "key" from that request and pass it to another HTTP request. How can I do that? What would the steps be? Please guide me.

(screenshot of the request omitted)
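In JMeter this kind of correlation is usually done with a Regular Expression Extractor added as a post-processor to the first request; a sketch (the pattern is a placeholder to adjust to the actual response body):

Regular Expression Extractor (child of the first request)
    Reference Name:      key
    Regular Expression:  "key"\s*:\s*"(.+?)"
    Template:            $1$
    Match No.:           1

The second request can then reference the captured value as ${key}.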

Inject Context in ActivityInstrumentationTestCase2

I am writing espresso tests for my Android application. I have an ActivityInstrumentationTestCase2, where I want to inject a context to use a test database instead of the actual database.

I create the test context in the setUp() method like this:

@Before
public void setUp() throws Exception {
    super.setUp();
    RenamingDelegatingContext context = 
            new RenamingDelegatingContext(getContext(), "test_");
    mActivity = getActivity();
    // Now how do I inject the new context to the activity?
}

How can I inject this context into my activity? And if I were able to inject it, would it become the new application context, or which context exactly?

I know that in ActivityUnitTestCase there is a method setActivityContext(Context), but in ActivityInstrumentationTestCase2 it is not there. Is there another way to set the context?

How to specify list size using FactoryBoy

Let's assume I have this model:

class FooContainerModel(object):
    def __init__(self, foos):
        self.foos = foos

I want to be able to decide how many foos will be created at creation time, e.g.:

model = FooContainerFactory(count=15)

I've tried factories like:

class FooContainerFactory(factory.Factory):
    class Meta:
        model = FooContainerModel

    foos = factory.List([Foo() for _ in xrange(20)]) # fixed amount
    foos = factory.lazy_attribute(lambda o: [Foo() for _ in xrange(20)]) # same

Of course I could manually create a list of Foo() with the desired length and instantiate FooContainerModel, but that's not what I want. Any solutions?
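One sketch using an excluded parameter: factory_boy's Meta.exclude keeps count out of the model constructor while leaving it visible to declarations, so callers can write FooContainerFactory(count=15):

import factory

class FooContainerFactory(factory.Factory):
    class Meta:
        model = FooContainerModel
        exclude = ('count',)   # consumed by the factory, not passed to the model

    count = 5  # default size, overridable per call

    @factory.lazy_attribute
    def foos(self):
        return [Foo() for _ in xrange(self.count)]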

Mocking a return value in a sub object

I want to write a test case for a feature within complex data structures. The feature doesn't rely on all the data, and getting real instances with the desired properties is hard. Hence, I am using mocks.

def test_case():
    d1 = Timeseries(...)  # data
    d2 = Timeseries(...)  # data

    fancy_t1 = Mock(data=d1, additional_property= ...)
    fancy_t2 = Mock(data=d2, additional_property= ...)

    container = Mock(data_sets=[fancy_t1, fancy_t2])

    ret = function_to_test(container)
    assert ret ...  # some condition

Within function_to_test there is a call of the form

container.aggregation.aggregate(fancy_t, more_arguments1, more_arguments2, ... )

What I want the aggregation.aggregate call to do is quite simple: it is supposed to evaluate to d1+d2 and ignore fancy_t and the other arguments.

How would I do that?

If I do something like this

agg = Mock()
agg.return_value = d1 + d2
container.aggregation.get_aggregated_positions = agg

It evaluates to something like <Mock name='mock.aggregation.get_aggregated_positions()' id='296569680'> instead of a Timeseries.
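The sketch below configures the same attribute path the code under test actually calls (aggregate, not get_aggregated_positions); attribute access on a Mock auto-creates child mocks, so setting return_value there makes the call itself evaluate to the desired Timeseries:

# any call, whatever the arguments, now yields d1 + d2
container.aggregation.aggregate.return_value = d1 + d2

container.aggregation.aggregate(fancy_t1, "arg1", "arg2")  # -> d1 + d2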

Unique Key required for each searchId

My script hits X search IDs (taken from a CSV dataset) of a URL. The issue is that each search ID needs its own unique "key". Since I recorded the script for one ID and then added a CSV for the search IDs, how can I create a different "key" to attach to each search ID so it gets a unique value? Please guide me. (screenshots omitted)

Thanks in advance
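If any unique token will do, JMeter's built-in functions can generate one per request without a CSV at all; a sketch of two options:

key=${__UUID}               (random unique value on every call)
key=${__counter(FALSE,)}    (sequential value shared across threads)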

Thursday, 30 July 2015

Thucydides/Serenity: Is it possible to put the scenario into a string?

Is it possible to get the scenario from the .story file while running and put it in a string?

Running test cases using LAVA

I am a newbie using the LAVA test framework. Suppose I have some set-up-ready ARM boards running Linux or Android, accessible by ssh/adb over the network. Is it possible to run only some test suites, for example the gcc test suites or the OpenCL conformance tests, without deploying the boot/kernel/rootfs images?

If I don't specify the deploy action in the JSON, I see the following information:

<LAVA_DISPATCHER>2015-07-30 04:01:58 PM INFO: General Exception: No operating system deployed. Did you forget to run a deploy action?

Thanks!

How to read text in Coded UI?

In Coded UI, I have to confirm that a particular text is present in a text box.

I am able to reach the dialog box and I know how to enter text. However, now I need to read text from the box.

Is there a WinApplication command that can help me do so?

nose.tools.eq_ vs assertEqual

The Problem:

We've been using nose test runner for quite a while.

From time to time, I see our tests having eq_() calls:

eq_(actual, expected)

instead of the common:

self.assertEqual(actual, expected)

The question:

Is there any benefit of using nose.tools.eq_ as opposed to the standard unittest framework's assertEqual()? Are they actually equivalent?


Thoughts:

Well, for one, eq_ is shorter, but it has to be imported from nose.tools, which makes the tests dependent on the test-runner library and can make it harder to switch to a different runner, say py.test. On the other hand, we also use the @istest, @nottest and @attr nose decorators a lot.
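For reference, nose.tools.eq_ boils down to a bare assert (a sketch of its definition below), so the two are behaviourally close; assertEqual additionally produces type-aware failure diffs and honours per-TestCase assertion customisation:

def eq_(a, b, msg=None):
    # essentially what nose.tools.eq_ does: a plain assert with a default message
    assert a == b, msg or "%r != %r" % (a, b)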

How to pass a value from a fixture to a test with clojure.test?

When using clojure.test's use-fixture, is there a way to pass a value from the fixture function to the test function?
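There is no direct channel, but a common workaround is a dynamic var that the fixture binds around each test; a sketch (open-connection is hypothetical):

(require '[clojure.test :refer [deftest is use-fixtures]])

(def ^:dynamic *conn* nil)

(defn with-conn [f]
  (binding [*conn* (open-connection)]   ; open-connection is hypothetical
    (f)))                               ; tests run inside the binding

(use-fixtures :each with-conn)

(deftest uses-conn
  (is (some? *conn*)))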

Module path loading error when running JavaScript tests with Testee and Steal

I am writing tests with FuncUnit and running them with Testee. I can run the tests successfully on my local machine, but when I run them on Jenkins, there is a loading error.

testee:html-injector injecting scripts into file +783ms /node_modules/testee/node_modules/launchpad/node_modules/dtrace-provider/package.json
testee:html-injector injecting scripts into file +391ms /node_modules/testee/node_modules/dtrace-provider/package.json
testee:html-injector injecting scripts into file +148ms /steal-qunit.js
testee:html-injector injecting scripts into file +1ms /funcunit.js
testee:runner CONSOLE: error Error loading "steal-qunit" at http://localhost:3996/steal-qunit.js
Error loading "steal-qunit" from "tests/test" at http://localhost:3996/tests/test.js
Not Found: http://localhost:3996/steal-qunit.js undefined
+2ms http://localhost:3996/tests/test.html?__token=8jf57q { browser: 'phantom' }
testee:runner CONSOLE: Potentially unhandled rejection [17] Error loading "steal-qunit" at http://localhost:3996/steal-qunit.js
Error loading "steal-qunit" from "tests/test" at http://localhost:3996/tests/test.js
Not Found: http://localhost:3996/steal-qunit.js (WARNING: non-Error used)
+0ms http://localhost:3996/tests/test.html?__token=8jf57q { browser: 'phantom' }
testee:html-injector injecting scripts into file +54ms /jquery.js
testee:html-injector injecting scripts into file +1ms /steal-qunit.js
testee:html-injector injecting scripts into file +0ms /funcunit.js
testee:runner CONSOLE: loading or loaded
CONSOLE: loading or loaded
CONSOLE: loading or loaded

steal-qunit.js, jquery.js, and funcunit.js should be loaded from the node_modules directory, but Testee is looking for them in the root directory. Can anyone help me out with this issue?

The main test javascript file looks like:

var QUnit = require('steal-qunit'),
    F = require('funcunit');

And each test file has this at the top of the file.

require('jquery');
require('steal-qunit');

Mock User Sign In Rspec - Using Faraday and Github-Api Gems

I am using the Faraday and Github-Api gems to log a user in with GitHub. In order to test any functionality, I have to mock this behavior. I am thinking that in my spec I should say something like "if the new action is called (which is what pings the GitHub API), then redirect to the create action and pass in the parameters it needs to create a user." I am not entirely sure how to do this. I realize there are solutions for OAuth (modules with helper methods to mock this behavior), but I am adding tests to an app that someone else built, so it has to stay as is. Any help or pointers would be greatly appreciated. I'm fairly new to testing as well, so feel free to comment on the test itself.

Spec

require "rails_helper"

RSpec.feature "User submits a project" do
  scenario "they see the index page for their projects" do

    project = create(:project)

    visit '/projects/new'

    fill_in "project_title", with: project_title
    fill_in "project_project_type", with: project_project_type
    fill_in "project_description", with: project_description
    fill_in "project_starts_at", with: project_starts_at
    click_on "Create Project"

    redirect_to '/projects'

    expect(page).to have_project_title
  end
end

Sessions Controller

def new
  redirect_to "githubapiurlgoeshere"
end

def create
  @user = User.find_or_create_from_auth_hash(auth_hash)
  if @user.save
    session[:token] = @user.token
    session[:user_id] = @user.id
    redirect_to root_path
  else
    flash[:notice] = "message here"
    redirect_to root_path
  end

end
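One sketch for bypassing the OAuth round-trip without touching the app: stub the helper the create action relies on (auth_hash below is assumed to be the controller helper visible in the code above) and drive the session directly:

before do
  # short-circuit GitHub: create receives a canned auth hash instead
  allow_any_instance_of(SessionsController).to receive(:auth_hash)
    .and_return("uid" => "123", "info" => { "name" => "Test User" })
end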

Testing/Automated testing | Quality Assurance in Python

I recently set myself a goal to learn QA (specifically browsers and web technology) and would like to ask for advice from you, Python gurus. I came up with a list of questions.

  1. What modules do you need to know? (list of modules)
  2. What Python tools do you use / which are the best? (list of tools)
  3. A few things one should be aware of when writing/reviewing test cases and executing/analyzing tests

Thank you all.

Cakephp Fixtures - record definition with DB expression

So I've recently started writing tests for my Cake application and I've run into a bit of a snag.

I prefer defining my test data using the $records variable, but one of my tables has a MySQL Point column, which is a data type not supported by Cake.

With the normal storing and retrieval I'm able to use a DB expression in the beforeSave of the model to convert it to a usable format:

private function prepareGeoData($data){
    $db = ConnectionManager::getDataSource($this->useDbConfig);
    return $db->expression("GeomFromText('POINT(" . $data['x'] . " " . $data['y'] . ")')");
}

While maybe not the best solution, this has been working. My problem is that I can't seem to get DB expressions to work with fixtures.

I've read the docs, particularly the part about Dynamic Data and Fixtures, but when I define my record like so:

public function init(){
    $db = ConnectionManager::getDataSource('default');
    $this->records = array(
        array(
            'id' => 1
            ,'name' => 'Point 1'
            ,'point' => $db->expression("GeomFromText('POINT(18 36)')")
        )
    );
    parent::init();
}

I got the following error:

Catchable fatal error: Object of class stdClass could not be converted to string in ...\lib\Cake\Model\Datasource\DboSource.php on line 2927

It clearly doesn't like the object returned by $db->expression(), but I can't think of any other way to get the data properly inserted...

Any insights are appreciated!

using Cake 2.3

Data-driven testing approach and tools

I have a new task at hand: to design a new test suite for an application driven by huge data sets (from Teradata, MS SQL and MySQL). We generally use regression testing to compare tables A-B, B-A. Is there a better approach for data-driven testing, or any open-source tool you know about that can help me do that?

Thank you for your time.

Testing Responsive Mobile Websites

We are developing our site as a responsive website so it works on mobile devices. Currently, we are having issues testing the website on mobile devices. The process is extremely tedious, since we have to test on iPhone simulators, Android simulators, Chrome browsers, etc. A lot of the time everything works fine on all browsers except the Android 4.2.2 stock browser.

How do big companies test these things? How can we efficiently test on mobile devices, especially the Android stock browser?

Also, is there some way to debug JavaScript and CSS in the Android stock browser?

RSpec is not saving a calculated value to the DB

I'm having trouble with one test I'm writing. I have a Budget model where the promise column is calculated by a method named calculate_budget in an after_create callback.

In normal dev mode the application saves to the DB, but in this test I'm grepping some other budget records where the promise is 0 instead of the calculated promise.

Here is some of the code:

before(:each) do
  @member1 = FactoryGirl.create(:member)
  @member2 = FactoryGirl.create(:member, id: "2")
  income = FactoryGirl.create(:income, member: @member1)
  donation1 = FactoryGirl.create(:donation1)
  donation2 = FactoryGirl.create(:donation2)
  @budget = FactoryGirl.create(:budget, donation: donation1, member: @member1)
end


it "keeps the calculated promise" do
  receipt1 = Receipt.create!(id: 1, date: '2015-01-01', member: @member1)
  receipt2 = Receipt.create!(id: 2, date: '2015-02-01', member: @member1)
  receipt1.items << ReceiptItem.create!(id: 1, donation_id: 1, amount: 10, receipt_id: 1)
  receipt2.items << ReceiptItem.create!(id: 2, donation_id: 1, amount: 20, receipt_id: 2)

  budget2 = FactoryGirl.create(:budget, title: 'budget2', start_date: '2016-01-01', end_date: '2016-12-31', donation_id: 1, member: @member1)

  ap "#budget promise => #{@budget.title} promise #{@budget.promise} remaining #{@budget.remainingPromiseCurrentBudget}"
  ap "#budget promise => #{budget2.title} promise #{budget2.promise} remaining #{budget2.remainingPromiseCurrentBudget}"

  @budget.save
  budget2.save

  ap Budget.all
  # debugger
  ap "budget2.get_all_old_budgets: #{budget2.get_all_old_budgets}"

  expect(budget2.remainingPromiseCurrentBudget).to be(210)
end

Any idea why it is not saving the record correctly?

How to detect untested ruby files?

I recently started working on a large Rails application. Simplecov says test coverage is above 90%. Very good.

However now and again I discover files that are not even loaded by the test suite. These files are actually used in production but for some reason nobody cared enough to even write the simplest test about them. As a result they are not counted in the coverage metrics.

It worries me since there is an unknown amount of code that is likely to break in prod without us noticing.

Am I the only one with this problem? Is there a well-known solution? Can we get coverage metrics for files that are never loaded?
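SimpleCov can at least make the blind spot visible: its track_files option adds files to the report even when the suite never loads them, so they show up at 0% instead of disappearing. A sketch:

SimpleCov.start 'rails' do
  # include every app/lib file in the report, loaded or not
  track_files "{app,lib}/**/*.rb"
end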

URL is created correctly but the result isn't successful

I am creating a script that hits a URL and logs the hit in the DB (a hit on a URL is recorded in the DB by the developer). Via the "Test Script Recorder" I have done the following:

  1. I recorded the login mechanism
  2. I hit the URL that is required to be hit (each URL varies only by a number)

Now I have run this script after changing the recorded script as follows:

  1. I replaced the URL number to be hit with ${Number} and provided a CSV file through the CSV Data Set Config
  2. For each URL it works fine and builds the URL correctly

The issue is that JMeter doesn't report any error. The URL is built correctly on the JMeter side; if I open it in a browser the hit is taken (and logged), but not through JMeter.

Since the URL is built correctly, why isn't the hit registering from JMeter? I am confused about this part; please guide me.

Skip certain steps in a scenario in Cucumber

I have a Cucumber scenario:

Scenario: Hello World
Then do action one
Then do action two
Then do action three
Then do action four
Then do action five

But depending on an environment variable, I want to skip action three and action four. I know I can go into the step and do an if-else check, but that's not very elegant. Is there a better solution? Thanks :)

Selenium Automation Testing

I have a webpage with a textbox field and a calendar icon near it. When I click on the calendar icon, a calendar view is displayed. It's an AngularJS datepicker. Can anyone provide an example of automating this type of date picker? (The automation proceeds until it reaches the calendar and opens it, and then it cannot proceed further.)

Test case for 2 dimension (integer) Convex Hull algorithm

I wrote a 2-dimensional convex hull algorithm that takes as input a file of points called filename.csv, where every point is on its own row and the X and Y coordinates are separated by a tab. Now I need some test files and their correct results, so I can tell whether my algorithm is working. I need some boundary cases so I can be sure the algorithm works. My algorithm accepts only INTEGER points. Can anyone point me to a website or a piece of software to generate test files and their correct results automatically?
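Generating inputs is easy to script; a sketch that emits random integer points in the expected tab-separated format (the expected hull still has to come from a reference implementation or from hand-checked boundary cases such as collinear points, duplicates, or all points on one line):

import random

# write 1000 random integer points, one per row, X and Y tab-separated
with open("filename.csv", "w") as f:
    for _ in range(1000):
        f.write("%d\t%d\n" % (random.randint(-100, 100), random.randint(-100, 100)))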

Need to know how to test database output as described below

I need to know a way of testing, or a technique, for the scenario below.

Input.txt ---> Processing module --> Output in database tables

Here is how my input.txt looks (it will be a huge file, say 10k lines):

timestamp1,organisation1,data1,localtimestamp1,place1
timestamp2,organisation1,data2,localtimestamp2,place1
timestamp3,organisation1,data3,localtimestamp3,place1

I feed this file to the processing module and the output is entered into database tables in a certain form (e.g. table1 to table10, each table containing more than 6 columns).

e.g. table1:

column1          column2
Processeddata1   place1
Processeddata2   place2

So like this i will get the output in db tables.

My question is how to test the output, since the input is in the form of packets in a .txt file and the processed data is in huge database tables.

Here is what I have tried:

  1. Process the input normally (now the processed data will be in DB tables)
  2. Export all the processed data tables to .csv files
  3. Validate these .csv files manually (only the first time)
  4. Keep these .csv files as standard reference files (STD)
  5. For every release, run the process, then export the output data from the tables to .csv files (Actual)
  6. Compare these (Actual) .csv files with the already stored (STD) .csv files

If you have any other way of testing, please suggest it. Please help.

Multiple level of sections in page_objects in nightwatch.js

I have just started out using nightwatch.js, and I am using page_objects to access elements in my tests. I was wondering whether there is any way to have sections within sections in page objects. I know we can specify one level of section. What I have done is something like this:

module.exports = {
  url : 'http://ift.tt/1Mz9PPh',
  sections : {
    topContainer : {
      selector : '.top_container',
      elements : {
        logo : {
          selector : '.logo'
        },
        settingsButton : {
          selector :'.dropdown'
        },
        searchBox : {
          selector : '.search_box'
        },
        sortOrderButton : {
          selector : '.icond'
        }
      }
    },
    library : {
      selector : '.library',
      bookList : {
        selector : 'ul.library_container'
      }
    }
  }
};

Can we have sections inside sections? And if not, how do we select elements in a test case with the @variable notation?

client.elements('css selector','@top_container ul.dropdown-menu li', function (result) {
      if ( result.value.length == 3 ) {
        this.verify.ok(result.value.length, '3 languages loaded');
      }
    });

Thanks!

PHPUnit: ignore code execution

I have some var_dumps in my PHP code (I understand there should be none in the end, but still), and while tests are running they output unnecessary information to the console. Is there a way to ignore the execution of some code?

I've tried

/**
 * @codeCoverageIgnore
 */

and

// @codeCoverageIgnoreStart
print '*';
// @codeCoverageIgnoreEnd

But this just excludes the code from coverage; it still executes.
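Since those annotations only affect reporting, one sketch is to gate the debug output on a flag defined by the PHPUnit bootstrap file (PHPUNIT_RUNNING is a name chosen here, not a PHPUnit built-in):

// in the test bootstrap file:
define('PHPUNIT_RUNNING', true);

// around the debug output in application code:
if (!defined('PHPUNIT_RUNNING')) {
    var_dump($value);   // only prints outside the test run
}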

Wednesday, 29 July 2015

Approval tests run in different locations depending on the level of execution

I have a test project with two directories containing approval tests. When running the tests in each directory individually, the approved files are searched for in the correct directory, i.e. the directory where the tests are placed.

But when running all tests in the project, the test in dir 2 fails. It seems as if all tests are executed from dir 1, since new, empty approved files are created in dir 1.

What is causing this behavior? I would like the same behavior regardless of the level from which the tests are started.

How do I load fixtures from third-party app for django in tests?

I know fixtures can be loaded in the tests.py like this:

fixtures = ['example.json']

Django automatically search all the apps' fixture directory and load the fixtures.

However, I created a reusable app named 'accounts', and in accounts I also have the fixtures/example.json file. But when I installed the 'accounts' app and added it to the INSTALLED_APPS setting, the fixture could not be loaded. I am curious why this happens.

Django == 1.8.2

Testing an async function with Jasmine in Meteor

I have looked at several other questions related to this on Stack Overflow, but I still can't seem to solve my problem. No matter what I do, it seems that either Meteor.call doesn't get invoked, or if I do get it invoked (as in the code sample below), then no matter what jasmine.DEFAULT_TIMEOUT_INTERVAL is set to, I keep getting the following error:

Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.

This is what my Jasmine test looks like:

it("Should be created and not assigned to anyone", function(done) {
    jasmine.DEFAULT_TIMEOUT_INTERVAL = 5000000;

    // Confirm that the User Has Logged in
    expect(Meteor.userId()).not.toBeNull();

    var contact = null;
    var text = "This is a testing task";
    spyOn(Tasks, "insert");
    spyOn(Meteor, "call");

    Meteor.call('addTask', contact, text, function(error, result) {
      expect(error).toBeUndefined();
      expect(result).not.toBeNull();
      done();
    });

    expect(Meteor.call).toHaveBeenCalled();

  });

});

And my addTask function looks like this:

Meteor.methods({

  addTask: function (contact, text) {
     ... // addTask Code, removed for brevity
  },
});

I've been stuck on this for weeks; any help anyone can provide would be super helpful.
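One sketch of the likely culprit: spyOn(Meteor, "call") replaces the real method with a no-op stub, so the server method never runs and the callback (and therefore done()) can never fire. Letting the spy delegate keeps the call observable without swallowing it:

// Jasmine 2.x: record calls but still execute the real Meteor.call
spyOn(Meteor, "call").and.callThrough();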

How are you supposed to unit test a Spring REST API

I have written a REST API that communicates with MongoDB to retrieve user information. I have actually written integration tests in JUnit using REST-Assured, and the REST API and MongoDB are functioning together as expected.

I had overlooked the actual unit-testing part. Now I am supposed to run a Maven build which also automates the test process.

It seems like I have to skip the mvn test phase of the build and execute my tests in the mvn integration-test phase, correct?
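That is the conventional split; a sketch of the Failsafe configuration that runs *IT classes during integration-test/verify while Surefire keeps handling plain unit tests in the test phase:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- integration-test runs the *IT classes; verify fails the build on errors -->
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>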

I am trying to write an if statement where an input of 21 prints out Time

So first of all I want to know: is there something that goes "if Hello = 18 print Hello"?

My code looks like this. I want it to print out Time, which is already a working variable. I want if Hello = 21, but it doesn't accept =. I also want to know why I can't just do print Time.

Hello = (input("What is 10 + 9 "))
if Hello > 20 print Time
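
For contrast, a minimal working sketch of what I am aiming for (assuming Python 3; the Time value is a placeholder, since in my program it already exists):

Time = "12:00"                          # placeholder for my existing variable
Hello = int(input("What is 10 + 9 "))   # input() returns a string, so convert it
if Hello > 20:                          # a comparison uses > or ==, a single = assigns
    print(Time)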

Maven build - run an application then perform JUnit tests

As part of my build process, I would like maven to start a program and then execute JUNIT tests in the same phase, which is the mvn test phase.

I am using Spring Boot as well as the Spring Boot Maven plugin. As you know, Spring Boot has an embedded Tomcat container, so I can run my application on this Tomcat server just by running it as an application in Eclipse.

For example when I run mvn test, I would like my application to run FIRST and then have the tests executed.

I have seen the exec-maven plugin being used and that you specify the phase in the maven build cycle that you want the application to run. The earliest phase that it allows you to specify is the test phase.

But I'm not sure whether the application will run immediately before the tests.

How do I specify in my POM file that I want the application to run BEFORE the test phase?
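
For reference, a sketch of the kind of POM wiring I am imagining, using the Spring Boot plugin's start/stop goals bound to the phases around test (whether these goals exist in my plugin version is an assumption on my part):

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>start-app</id>
            <phase>process-test-classes</phase> <!-- runs just before the test phase -->
            <goals><goal>start</goal></goals>
        </execution>
        <execution>
            <id>stop-app</id>
            <phase>prepare-package</phase> <!-- first phase after test -->
            <goals><goal>stop</goal></goals>
        </execution>
    </executions>
</plugin>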

Testing Flask routes do and don't exist

I'm creating a large number of Flask routes using regular expressions. I'd like to have a unit test that checks that the correct routes exist and that incorrect routes 404.

One way of doing this would be to spin up a local server and use urllib2.urlopen or the like. However, I'd like to be able to run this test on Travis, and I'm assuming that's not an option.

Is there another way for me to test routes on my application?
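
For what it's worth, a minimal sketch of the direction I am considering: Flask's built-in test client drives the app in-process, so nothing needs to listen on a port (the myapp module and routes are assumptions):

import unittest

from myapp import app  # assumption: the Flask app object lives in a myapp module

class RouteTests(unittest.TestCase):
    def setUp(self):
        # the test client exercises routing without spinning up a server
        self.client = app.test_client()

    def test_known_route_exists(self):
        self.assertNotEqual(self.client.get('/some/route').status_code, 404)

    def test_unknown_route_404s(self):
        self.assertEqual(self.client.get('/no/such/route').status_code, 404)

if __name__ == '__main__':
    unittest.main()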

Karma Jasmine: Executed 0 of 0 Error

I am getting into Angular testing with Karma and Jasmine. After doing the karma init and writing the first test for a home controller, I keep getting Executed 0 of 0 ERROR. It does not seem like the spec files are being picked up.

module.exports = function(config) {
    config.set({

    basePath: '',

    frameworks: ['jasmine'],
    files: [
        'public/assets/libs/angular/angular.min.js',
        'bower_components/angular-mocks/angular-mocks.js',
        'public/app/app.module.js',
        'public/app/app.config.js',
        'public/app/**/*.js',
        'test/unit/**/*.spec.js'
    ],

    exclude: [
    ],

    preprocessors: {
    },

    reporters: ['progress'],

    port: 3030,

    colors: true,

    logLevel: config.LOG_INFO,

    autoWatch: true,

    browsers: ['Chrome'],

    singleRun: false

    }); //config.set
} //module.export

And the HomeController:

(function() {
'use strict';

angular
    .module('app')
    .controller('HomeController', HomeController);

HomeController.$inject = ['$scope', '$log'];

function HomeController($scope, $log) {
    /*jshint validthis: true*/
    var vm = this;

    vm.message = 'Hello World';

} //HomeController()

})(); //Controller
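
For completeness, a minimal spec of the kind I am trying to run (stored under test/unit/ so the files pattern above matches it; the assertion body is just a placeholder):

// test/unit/home.controller.spec.js
describe('HomeController', function() {
    var vm;

    beforeEach(module('app'));  // module() and inject() come from angular-mocks

    beforeEach(inject(function($controller) {
        vm = $controller('HomeController', { $scope: {} });
    }));

    it('exposes a greeting message', function() {
        expect(vm.message).toBe('Hello World');
    });
});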

Thanks for helping.

Testing a file upload to a form in a Behat feature file

I am quite new to writing Behat test suites and I am currently trying to flesh out my existing feature file with an added test for an uploaded file.

This is what I have come up with so far.

  Scenario: Submitting a valid asset form and uploading a file
    When I submit a asset form with values:
      | name               | type  | position | active | file                                 |
      | St Andrews Release | image | 1        | 1      | /web/images/product/icon/default.jpg |
    Then the form should be valid
    And the entity form entity should have the following values
      | name               | type  | position | active | file                                 |
      | St Andrews Release | image | 1        | 1      | /web/images/product/icon/default.jpg |
      Failed asserting that null matches expected '/web/images/product/icon/default.jpg'.
    And the entity form entity should be persisted correctly

This is the method handling the scenario:

   /**
     * @When I submit a asset form with values:
     */
    public function iSubmitAssetFormWithValues(TableNode $table)
    {
        $data       = $table->getColumnsHash()[0];
        $this->form = $this->submitTheForm('crmpicco.asset.type', $this->entity, $data);
    }

The submitTheForm method returns a Symfony\Component\Form\FormInterface.

Am I on the right lines? I am currently getting an error:

Failed asserting that null matches expected '/web/images/product/swatch/default.jpg'.

Adding interceptors on endpoints in a live camelContext without any CamelTestSupport

I am trying to add Camel testcases for an application that will be deployed on Fuse ESB. I presently have testcases based on CamelBlueprintTestSupport. I add interceptors on the routes and endpoints and do my assertions.

I am now exploring the possibility of doing similar testcases using pax-exam so that the testcase can run directly on the fuse environment. I have setup my configuration for pax exam so that it loads all my bundles and config files and the camel routes are up and running.

But since I have to use the camelContext provided by my bundle, I can no longer use CamelBlueprintTestSupport or CamelTestSupport for that matter as both of these will create their own contexts rather than use the one provided by my OSGI bundle.

@RunWith(PaxExam.class)
@ExamReactorStrategy(ActiveMQPerClass.class)
public class XyzIT extends PaxEndToEndTestSupport{  

@Inject
protected BundleContext bundleContext;

protected MockEndpoint some_endpoint;

@Inject
@Filter("(camel.context.name=myCamelContext)")
protected CamelContext camelContext;

@Before
public void configureCamelContext(){        
    try{                    

        this.camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {                  
                interceptSendToEndpoint("direct:utilities:sm_route").id("sm_mock_intercept").to("mock:sm_mock");
                from("myQ:queue:{{jms.queue.error.uri}}").to("mock:error_uri");
            }
        });
    }
    catch(Exception ex){
        ex.printStackTrace();
    }       

}

@Test
public void xyzTest() throws Exception {

    //Some Test
}

}

THIS WILL NOT WORK

In such a scenario, how do I modify the camelContext provided by my bundle and add interceptors to it? I cannot set the isUseAdviceWith flag, so no advices will work. Maybe I am not supposed to use PAX-Exam this way, but is there any way to coax the CamelContext into adding intercepts, or to modify it in any way, without CamelTestSupport?
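
One direction I have been sketching (it is my assumption that this works outside CamelTestSupport; the route id is a placeholder) is advising an existing route definition directly on the injected context, using the Camel 2.x adviceWith API:

// inside @Before, instead of addRoutes()
ModelCamelContext mcc = (ModelCamelContext) camelContext;
mcc.getRouteDefinition("sm_route_id").adviceWith(mcc, new AdviceWithRouteBuilder() {
    @Override
    public void configure() throws Exception {
        // divert sends so the mock endpoint can assert on them
        interceptSendToEndpoint("direct:utilities:sm_route").to("mock:sm_mock");
    }
});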

How to run ProGuard on Android tests?

We have a project with some library (and native) dependencies, like this:

Native SDK <- Library (Wrapper) <- Main Project

To start with, this structure cannot be changed, as we are reusing the parts. The problem I am facing is exceeding the 65k method reference limit. This, of course, has a workaround: enable ProGuard. With it enabled, the project compiles.

Since we are transitioning to the Android's default testing framework, we need to add some more dependencies to the testing config, so in dependencies we now have:

compile 'com.google.android.gms:play-services-base:7.5.0'
compile 'com.google.android.gms:play-services-gcm:7.5.0'
compile 'com.google.android.gms:play-services-safetynet:7.5.0'
compile 'com.android.support:appcompat-v7:22.2.1'
compile 'com.android.support:recyclerview-v7:22.2.1'

compile files('libs/small-library1.jar')
compile files('libs/small-library2.jar')
compile files('libs/small-library3.jar')
compile files('libs/medium-library1.jar')
compile files('libs/medium-library2.jar')
compile files('libs/medium-library3.jar')
compile files('libs/huge-library1.jar')
compile files('libs/huge-library2.jar')

androidTestCompile 'com.android.support.test:runner:0.3'
androidTestCompile 'com.android.support.test:rules:0.3'
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2'

We are using SDK (API) 22, so everything is pretty much at the latest version. The problem is, our native SDK has a bunch of code in the protobuf layer, and the library wrapper is big. With all the other JARs, we are way over the 65k limit (but as I said, ProGuard barely fixes this). Multi-dex is out of the question as it works well only on Android 5.0+.

We're trying to reduce the codebase, but even then, Android Tests are failing with method reference overflow issues (i.e. not compiling).

Is there any way to enable ProGuard for tests as well?
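
For reference, the kind of Gradle configuration I have been experimenting with (my assumption being that testProguardFile shrinks the androidTest APK the same way proguardFiles shrinks the app):

android {
    buildTypes {
        debug {
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
            // hypothetical extra rules file for the test APK
            testProguardFile 'test-proguard-rules.pro'
        }
    }
}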

Thanks!

XCTest and KIF Common Methods in Separate Class

So I'm writing UI tests with KIF and XCTest. There is a lot of duplicate code in my test cases, so I decided to subclass XCTestCase and put all my common code there. The problem I'm running into is that if an assertion fails in the common code (the superclass), the test fails but gives no indication as to what failed. Any ideas on how to pass the failed assertion up to the test that called it and flag the message?
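
A sketch of the helper pattern I am considering: pass the caller's file and line down so the failure is flagged at the call site (recordFailureWithDescription:inFile:atLine:expected: is XCTestCase's reporting method; the helper itself is hypothetical):

// in the shared superclass
- (void)assertLoggedInAsUser:(NSString *)user inFile:(NSString *)file atLine:(NSUInteger)line
{
    if (![self isLoggedInAsUser:user]) {
        // report the failure against the caller's location, not the helper's
        [self recordFailureWithDescription:@"user is not logged in"
                                    inFile:file
                                    atLine:line
                                  expected:YES];
    }
}

// in the concrete test
[self assertLoggedInAsUser:@"someuser" inFile:@(__FILE__) atLine:__LINE__];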

Automated Tests Java SOAP

I'm looking for a framework that makes it possible to create automated tests for a SOAP application written in JAX-WS. I have read about some frameworks/APIs but I need to make a decision. Does anyone have an opinion on these frameworks: ReCrash, Citrus, Randoop and Robot Framework?

Thanks all ;)

What is the best practice for usability testing for website?

What is the best practice for usability testing for website?

Group based testing in protractor and jasmine as we do it using TestNG

We group test cases in TestNG using:

@Test( groups = {"xyz", "test" })
@Test( groups = {"Testing" })

If we call a group in the XML configuration, it will run only that group's test cases; the remaining ones won't execute. We call it in the XML as shown below:

<groups>
    <run>
        <include name="Testing"/>
    </run>
</groups>


Like this, is there any behavior available in the Jasmine framework for grouping test cases without writing the test cases in a separate place?
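
The closest equivalent I know of is sketched below: nested describe blocks act as groups, and fdescribe/fit (Jasmine 2.x) focus one group so everything else is skipped (the suite names are placeholders):

describe('Testing', function() {
    it('runs as part of the Testing group', function() {
        expect(true).toBe(true);
    });
});

fdescribe('xyz', function() {  // the "f" prefix focuses this group only
    it('runs while the group is focused', function() {
        expect(1 + 1).toBe(2);
    });
});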

Jpa: Generating test data from entities

We have some dynamically generated dao classes for all the tables in the database. Basically they all do same CRUD operations on the respective tables.

I'm looking to see how I can test this generated code. The immediate idea is to populate some test data and launch these CRUD operations. But being new to JPA/Hibernate, I have no clue how I can populate test data. I know neither the table names nor the fields. All I have is the DAO library (I don't know the entity names either, as they are generated dynamically). Any approach to testing would be much appreciated. TIA
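
One discovery step I am considering, sketched here (emf is assumed to be the application's EntityManagerFactory): JPA's Metamodel can list the entities and their attributes at runtime, even when the names are unknown up front:

import javax.persistence.metamodel.Attribute;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.Metamodel;

Metamodel metamodel = emf.getMetamodel();
for (EntityType<?> entity : metamodel.getEntities()) {
    System.out.println("entity: " + entity.getName());
    for (Attribute<?, ?> attr : entity.getAttributes()) {
        // enough information to generate one test row per entity
        System.out.println("  " + attr.getName() + " : " + attr.getJavaType().getSimpleName());
    }
}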

how to test option item in select attribute using protractor?

I am trying to test the text of the items in a select's options, but my test fails and gives me an error. Here is my spec:

it('should test the sorting_options text', function() {

  expect(element.all((by.id('sorting_options')).Last().text).toBe('Score');
});

Here is the error I received:

C:\wamp\www\First-angular-App> protractor conf.js

Starting selenium standalone server...
[launcher] Running 1 instances of WebDriver
Selenium standalone server started at http://192.168.100.9:31794/wd/hub
[launcher] Error: C:\wamp\www\First-angular-App\protractorSpec\spec.js:38

How can I resolve this issue?
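
A corrected sketch of what I think the spec should look like (my assumption: the options live inside #sorting_options, and last()/getText() are the ElementArrayFinder calls I was reaching for):

it('should test the sorting_options text', function() {
    var options = element.all(by.css('#sorting_options option'));
    expect(options.last().getText()).toBe('Score');
});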

How to get column data from a grid using protractor?

I am not able to access the data in a column of a grid.

Can anyone suggest a method other than the one below?

element.all(by.repeater('col in colContainer.renderedColumns track by col.uid').column('Entity'))
    .getText()
    .then(console.log);

OCMock - how to mock accessibility in iOS

I am using Xcode and my mocking framework is OCMock. How can I use OCMock to mock that accessibility is turned on, so I can run some simple accessibility UI tests?

Should I mock UIAccessibilityIsVoiceOverRunning()? If so, how would I do that?

I tried the following, but it won't compile:

__block id mockClass = OCMClassMock (UIAccessibility.class);

It gives the error "use of undeclared identifier UIAccessibility", and that makes sense because it's not a class. My end goal is to mock the UIAccessibilityIsVoiceOverRunning() function, that's it.
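
The indirection sketched below is my workaround idea, since OCMock stubs methods on classes and objects, not free C functions (the wrapper class is hypothetical):

// production code calls the wrapper instead of the C function directly
@interface AccessibilityStatus : NSObject
- (BOOL)isVoiceOverRunning;
@end

@implementation AccessibilityStatus
- (BOOL)isVoiceOverRunning {
    return UIAccessibilityIsVoiceOverRunning();
}
@end

// in the test: stub the wrapper so VoiceOver is reported as on
id status = OCMClassMock([AccessibilityStatus class]);
OCMStub([status isVoiceOverRunning]).andReturn(YES);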

jQuery .val() doesn't make a 'change'

I have problems testing my JavaScript. A part of my test looks like

$('#activityType').val("33");
$('#favorite').click();

The $('#activityType') is a select field and I want to select the option with the value "33". Now I expected that this would be a change, so that this function in my program:

$('body').on('change', '.item-select', function() {
    var itemRow = $(this).parent().parent();
    changeBookableItemInputFields(itemRow);
});

will be executed.

The $('#activityType') element has the class item-select, so I don't understand why $('#activityType').val("33"); is not a change. I changed the value and the class attribute is there. The body should be able to find it and the function should be executed.

Can anybody tell me why it doesn't work?
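
One thing I have since noticed in the jQuery docs: setting a value programmatically with .val() does not fire the change event by itself, so the test has to trigger it explicitly. A sketch of the adjusted test:

$('#activityType').val("33").trigger('change');  // fire the change event by hand
$('#favorite').click();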

Unit testing: How can I import test classes dynamically and run them?

I'm writing a simple script to run and test my code. How can I dynamically import and run my test classes?
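
A minimal sketch of the pattern I mean, using only the standard library (the module names in the list are hypothetical):

import importlib
import unittest

loader = unittest.TestLoader()
suite = unittest.TestSuite()

for name in ['tests.test_users', 'tests.test_orders']:  # discovered or configured names
    module = importlib.import_module(name)              # dynamic import by dotted path
    suite.addTests(loader.loadTestsFromModule(module))  # collect its TestCase classes

unittest.TextTestRunner(verbosity=2).run(suite)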

Google Play alpha version needs uninstall first

I have a published app and just uploaded an alpha version with a higher version number. I have also defined a Google Group as a tester group. Do the members of this group need to uninstall and reinstall my app in order to get the new alpha version, or is this done automatically?

Jmeter: To login multiple times and hitting multiple URLS

I am a novice with JMeter and have just started to learn its inner functionality. I am stuck on a problem. I have to hit multiple URLs where only the search ID changes, so in "HTTP Request" I have placed "/build-4.4.10.0/?earchId=${ID}&Application=sc&IsSearchLink=TRUE"

I am providing the session key and the search ID through a CSV file. The problem is that although it goes to the link, it gets redirected to the login page, and I do not know how to create users at run time and assign one to each URL.

I have 200+ URLs. What should I do? Please guide me.

Thanks

Minitest testcase fails when I run the complete suite, succeeds when I run it alone

I am deeply confused about my tests and one testcase specifically:

When I run all of my integration tests together, this specific testcase gives me this error:

UsersSignupCapybaraTest test_signup_process_with_capybara ERROR (5.16s) Capybara::ElementNotFound: Unable to find link or button "Sign up now!"

When running just this one test, it passes:

UsersSignupCapybaraTest test_signup_process_with_capybara PASS (10.19s)

Can someone explain this to me? I asked a similar question yesterday here. I think I am not understanding some basic mechanism of my tests. Am I wrong in assuming that each testcase is tested in isolation? Or does one start where the previous one stopped? That wouldn't make sense, as I would have to take care of the order in which they get executed, which doesn't sound right to me.

Here is the file containing the testcase:

require 'test_helper'

class UsersSignupCapybaraTest < ActionDispatch::IntegrationTest
  def setup
    Capybara.register_driver :selenium_chrome do |app|
      Capybara::Selenium::Driver.new(app, :browser => :chrome)
    end
    Capybara.current_driver = :selenium_chrome
  end

  test "signup process with capybara" do
    visit root_path
    click_on "Sign up now!"
    fill_in "user_name", with: "Neuer User"
    fill_in "user_email", with: "neuer@user.de"
    # more code ...
  end
end

Testing a date program over a long time

I wonder if there is a way to test a real-time program in a short time.

I have a program which executes tasks once a year, once a month, once a day, many times an hour, and so on. And all these tasks are mixed.

I can test it by changing the frequencies (for example, once a month becomes once a minute, as I like to run many tests). But I think that some mistakes could be hidden by this process.

How do developers handle this?

I have thought about a virtual machine whose system clock runs faster.

I had a quick look at testing software, but I haven't found what I was looking for.
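
A sketch of the "injectable clock" idea I have been circling around (all names are mine; shown in Python for brevity): the program asks a clock object for the time, and the test swaps in a fake clock that can jump forward:

import datetime

class FakeClock:
    def __init__(self, start):
        self.current = start
    def now(self):
        return self.current
    def advance(self, delta):
        self.current += delta  # the test crosses a month in microseconds

def monthly_task_due(clock, last_run):
    return clock.now() >= last_run + datetime.timedelta(days=30)

clock = FakeClock(datetime.datetime(2015, 1, 1))
last_run = clock.now()
clock.advance(datetime.timedelta(days=31))
assert monthly_task_due(clock, last_run)  # a year's schedule can be replayed this way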

Hibernate: include configuration files in WEB-INF in tests

I have a web app and I am trying to set up the config for the unit tests. I have the following structure:

project
-src/main/java
-src/main/resources
-src/test/java
-src/test/resources
-src
  -main
    -webapp
      -WEB-INF
        -spring
  -test
    -spring

All my Spring configuration files are stored in the project\src\main\webapp\WEB-INF\spring directory, but my test configuration files are stored in the project\src\test\spring directory.

For my tests I want to use some of the configuration files in the project\src\main\webapp\WEB-INF\spring directory, but I keep getting a file-not-found exception when I try to access them.

Is there a way to keep my configuration files in the WEB-INF folder but still have them visible to my test configuration files?
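
A sketch of the wiring I have been attempting (the XML file names are placeholders): the "file:" prefix resolves relative to the working directory, which for Maven builds is the project root, so WEB-INF can be reached without copying files:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {
    "file:src/main/webapp/WEB-INF/spring/app-context.xml",  // hypothetical file name
    "classpath:spring/test-overrides.xml"                   // test-only overrides
})
public class RepositoryIntegrationTest {
    @Test
    public void contextLoads() {
        // enough to prove the WEB-INF files were found and parsed
    }
}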

Ruby Conditional Test

I can't make my code pass this test:

it "translates two words" do
    s = translate("eat pie")
    s.should == "eatay iepay"
  end

I don't see the flaw in my logic, though it may be very brute force and there may be a simpler way of passing the test:

def translate(string)
    string_array = string.split
    string_length = string_array.size
    i=0

    while i < string_length
        word = string_array[i]
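        # nb: ("a" || "e" || "i" || "o" || "u") short-circuits to just "a", since "a" is truthy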
        if word[0] == ("a" || "e" || "i" || "o" || "u")
            word = word + "ay"
            string_array[i] = word

        elsif word[0] != ( "a" || "e" || "i" || "o" || "u" ) && word[1] != ( "a" || "e" || "i" || "o" || "u" )
            word_length = word.length-1
            word = word[2..word_length]+word[0]+word[1]+"ay"
            string_array[i] = word

        elsif word[0] != ( "a" || "e" || "i" || "o" || "u" )
            word_length = word.length-1
            word = word[1..word_length]+word[0]+"ay"
            string_array[i] = word
        end

        i += 1
    end
    return string_array.join(" ")
end

Here's the test failure message:

Failures:

 1) #translate translates two words
     Failure/Error: s.should == "eatay iepay"
       expected: "eatay iepay"
            got: "ateay epiay" (using ==)
     # ./04_pig_latin/pig_latin_spec.rb:41:in `block (2 levels) in <top (required)>'

The additional code checking other conditions is for other tests that I have already passed. Basically, now I'm checking a string with two words.

Please let me know how I can make the code pass the test. Thank you in advance!

SAP HANA users/roles privilege check

Please help me with my case.

We have SAP HANA implemented in our company.

There were a few custom roles created based on this doc provided by SAP: http://ift.tt/1Kyqxem.

We have a few test users created with different roles assigned (like transport executor and transport manager). What is the recommended way to test whether these roles are properly configured? We were thinking about XSUnit, but we have no clue what checks should be performed in order to verify the privileges.

Is this approach proper? Any ideas will be much appreciated.

Thank you.

jQuery .click() is clicking, but it shouldn't

While testing my JavaScript I have the following problem:

$('#idOfMyElement').click();

is executed. But I want to verify with my test that it is not executed, because the element has the following CSS:

<span style="cursor: not-allowed; pointer-events: none;" id="idOfMyElement"></span>

I debugged it and am sure that when .click() is executed, the element DEFINITELY has the mentioned CSS attributes. In my normal program it works (meaning that the click does nothing), but in my test the click works, even though it shouldn't.

I have no clue what might be the problem. Thanks for your help!
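
My current suspicion, sketched below: pointer-events: none only blocks real mouse input, while jQuery's .click() invokes the handlers directly, so the test has to check the style itself before triggering (this guard is my own workaround idea):

var $el = $('#idOfMyElement');
if ($el.css('pointer-events') !== 'none') {
    $el.click();  // only simulate the click when a real user could click
}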

Tuesday 28 July 2015

Nightly build in node js

I need some suggestions about tools and modules regarding Node.js. I want to set up a nightly build system on my server (my local machine). My motive is testing REST APIs and socket APIs. I am searching for tools for this, especially socket testing.

I need some extra features like code coverage, test reports, etc. Is there any tool available for Node.js like Jenkins and its plugins, JUnit/PHPUnit, or Phing/Puppet? It should especially be open source.

Thanks in advance. :)

Jmeter recorded requests have server response time different from those for real requests

I've recorded a scenario with the JMeter Recorder (I also used HTTP Authorization and Cookie managers in the thread). It was an iPad app that sent synchronization requests to the server and got back the data.

The server logs say that my "real" requests from the iPad took about 2 seconds to be processed:

1.741 seconds - [29/Jul/2015:05:46:11 +0000] "GET /***/sync/790833 HTTP/1.0" 200 659338
1.704 seconds - [29/Jul/2015:05:46:34 +0000] "GET /***/sync/790834 HTTP/1.0" 200 31

However, when I replay the recorded requests, the server takes much less time to process them:

0.044 seconds - [29/Jul/2015:05:47:13 +0000] "GET /***/790833 HTTP/1.0" 200 470409
0.041 seconds - [29/Jul/2015:05:47:14 +0000] "GET /***/790834 HTTP/1.0" 200 470409

Moreover, the response size is different.

What can be the reason that the server treats these seemingly equal requests differently?

Is testing compulsory if it works fine in real time in the browser?

I am working for a company that wants me to test and cover every piece of code I have. My code works properly from the browser. There is no error, no fault.

Given that my code works properly in the browser and my system is responding properly, do I need to do testing? Is it compulsory?

What file names should I test my application with?

My application involves file loading and saving (Windows only); the user can enter any name they like.

I want to ensure I have enough checks and validation to prevent any errors occurring due to the name that the user selects.

What are some names that commonly cause problems, against which I should test my application?
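
A starter list I have collected so far from the Win32 naming rules (a sketch; the full set worth testing is exactly my question):

# names that are historically hostile on Windows
TRICKY_NAMES = [
    "CON", "PRN", "AUX", "NUL", "COM1", "LPT1",  # reserved device names
    "report.", "report ",                        # trailing dot/space are stripped
    'a<b', 'a>b', 'a:b', 'a"b', 'a|b', 'a?b',    # reserved characters
    "a*b", "a\\b", "a/b",
    "",                                          # empty name
    "x" * 300,                                   # longer than MAX_PATH (260)
]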

Is it possible to send a reason for Jasmine 2 specs skipped with xit or pending()?

When we find a bug with one of our Protractor Jasmine2 specs, we usually want to skip the test until the bug has been resolved.

I know how to do this with xit or pending(), and JasmineReporters TerminalReporter is doing a nice job of color highlighting and listing pending specs.

However, the pending tests always report No reason given, which implies it is possible to give a reason for the skipped test.

I currently comment the spec with an issue number, but it would be really nice to report the reason the test was disabled and the issue number.
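
A sketch of what I am hoping for, on the assumption that Jasmine 2's pending() accepts a message (the issue number is illustrative):

it('uploads an avatar', function() {
    pending('disabled until issue #123 is fixed');  // hypothetical issue reference
    // ...spec body stays in place...
});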

ember.js vs angular.js performance testing

I am trying to make an intranet-based JavaScript app which can receive and submit data via an API. However, my issue is that the server is also accessed remotely and performance is very poor.

Due to processes within the company, they will only be accessing my app through a web browser on the server itself via Remote Desktop. Because of this there are performance constraints to consider, so I am asking what the most suitable low-overhead, high-performing JS framework would be.

I have studied the following findings ( http://ift.tt/1KuGzG6 ), however I am still not sure which JavaScript framework to choose, or whether to use any JavaScript framework at all.

So I am interested to know: what is a proper way to test the performance of angular.js vs. ember.js vs. raw JS?

What sort of tools are available for performance testing? What should my test cases be?
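
For concreteness, the crude kind of harness I have in mind (performance.now() is standard; the render functions stand in for each framework's code path):

function benchmark(label, renderFn, iterations) {
    var start = performance.now();
    for (var i = 0; i < iterations; i++) {
        renderFn();  // e.g. build and bind a 1,000-row list
    }
    var perRun = (performance.now() - start) / iterations;
    console.log(label + ': ' + perRun.toFixed(2) + ' ms per render');
}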

How to use protractor to verify if two spans are equal in different locations?

<span ng-bind="locations.selectedCount" class="ng-binding">1005</span>

<span ng-bind="locations.selectedCount" class="ng-binding">1005</span>

How would I verify through Protractor that the values of these two spans are the same, when one span is under an tag while the other is under a label tag in a different place?

Is it done using 'equal'?
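
A sketch of the comparison I would attempt (by.binding matches the ng-bind expression, so both spans should be found regardless of their parent tags):

var counts = element.all(by.binding('locations.selectedCount'));
expect(counts.get(0).getText()).toEqual(counts.get(1).getText());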

TestNG: RetryAnalyzer, dependent groups are skipped if a test succeeds upon retry

I have a RetryAnalyzer and a RetryListener. In the RetryListener's onTestFailure, I check whether the test is retryable; if yes, I set the result to SUCCESS. I also call testResult.getTestContext().getFailedMethods.removeResult(testResult) in this method.

I remove failed results again (with valid if conditions) in the onFinish method of the listener.

Now the problem I am running into is this: I split the test classes into groups. One test class does the WRITEs and one test class does the READs, so the READs group depends on the WRITEs group.

If a test case fails on the first attempt and succeeds on retry, then all the test cases in the dependent group are SKIPPED, despite the failed result being removed in the onTestFailure method.

Is there a way to run dependent methods if a test case succeeds on retry? I am fine with the behavior when the test case fails all attempts, so I am not looking to add "alwaysRun=true" to each dependent method.

Custom jasmine matchers and protractor

We've added a toHaveClass custom jasmine matcher and, in order to make it work, we had to add it to beforeEach() (with the help of this topic).

And, to follow the DRY principle and to avoid repeating the matcher definition in every beforeEach() in specs where toHaveClass is needed, we've added a beforeEach() block right into onPrepare():

onPrepare: function () {
    var jasmineReporters = require("jasmine-reporters");
    require("jasmine-expect");

    // ...

    // custom matchers
    beforeEach(function() {
        jasmine.addMatchers({
            toHaveClass: function() {
                return {
                    compare: function(actual, expected) {
                        return {
                            pass: actual.getAttribute("class").then(function(classes) {
                                return classes.split(" ").indexOf(expected) !== -1;
                            })
                        };
                    }
                };
            }
        });
    });
},

It actually works, but every time I see the beforeEach() block inside the Protractor config I have a micro-depression and a strong feeling that it is not a good place to define matchers.

The Question:

Is there a better way or place to have custom matchers defined?

PowerMockito.whenNew isn't working

I've got a class:

package test;

public class ClassXYZ {
    private final String message;

    public ClassXYZ() {
        this.message = "";
    }

    public ClassXYZ(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "ClassXYZ{" + message + "}";
    }
}

and a test:

package test;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
public class MockClassXYZ {

    @Test
    public void test() throws Exception {
        PowerMockito.whenNew(ClassXYZ.class).withNoArguments().thenReturn(new ClassXYZ("XYZ"));

        System.out.println(new ClassXYZ());
    }
}

but it still creates a real instance and prints:

ClassXYZ{}

What am I doing wrong?

P.S. Maven deps:

<dependencies>
    <dependency>
        <groupId>org.powermock</groupId>
        <artifactId>powermock-module-junit4</artifactId>
        <version>1.5.6</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.powermock</groupId>
        <artifactId>powermock-api-mockito</artifactId>
        <version>1.5.6</version>
        <scope>test</scope>
    </dependency>
</dependencies>
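
One thing I have been reading about since posting (my understanding, not yet verified): PowerMock can only intercept a constructor call if the class that performs the new is listed in @PrepareForTest, so the test class itself would need preparing here:

@RunWith(PowerMockRunner.class)
@PrepareForTest(MockClassXYZ.class)  // the class that executes "new ClassXYZ()"
public class MockClassXYZ {
    // ... test as above ...
}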

Monkey testing for Blackberry

Are there any tools for monkey testing BlackBerry OS 10 mobile applications, just like MonkeyRunner for Android applications?

Thank you.

Crash when testing my app (using Swift) on a physical device: Thread 1: EXC_BREAKPOINT (code=1, subcode=0x120021088)

I have a problem when running my app on a physical device, but it works fine when I run it on a simulator.

Error:

Thread 1: EXC_BREAKPOINT (code=1, subcode=0x120021088)

Logs:

dyld: Library not loaded: @rpath/libswiftCore.dylib Referenced from: /private/var/mobile/Containers/Bundle/Application/0E769751-670F-4E12-90D3-A51C3DC14793/http://ift.tt/1HYAMYm Reason: no suitable image found. Did find: /private/var/mobile/Containers/Bundle/Application/0E769751-670F-4E12-90D3-A51C3DC14793/http://ift.tt/1HYAMYq: mmap() error 1 at address=0x1001F8000, size=0x0015C000 segment=__TEXT in Segment::map() mapping /private/var/mobile/Containers/Bundle/Application/0E769751-670F-4E12-90D3-A51C3DC14793/http://ift.tt/1HYAMYq


P.S. I didn't put my code up here because I just ran it on my iPhone and it worked fine; after that I didn't change a single line of my code.

P.P.S. Has anyone gotten this error before?

Get content of email sent during command tests

During my tests I call some commands which send emails. I can display the number of emails sent with the following command:

$output->writeln(
    $spool->flushQueue(
        $this->getContainer()->get('swiftmailer.transport.real')
    )
);

The Symfony2 documentation explains how to get email content by using the profiler during a Web test (also explained here on Stack Overflow), but I don't know how to do the same thing when there is no Web request.

I used the code provided in these links:

public function testEmailCommand()
{
    // load data fixtures

    // http://ift.tt/1silozM
    $client = static::createClient();
    // Enable the profiler for the next request (it does nothing if the profiler is not available)
    $client->enableProfiler();

    /** @var \Symfony\Bundle\FrameworkBundle\Console\Application $application */
    // inherit from the parent class
    $application = clone $this->application;

    $application->add(new EmailCommand());
    $command = $application->find('acme:emails');
    $commandTester = new CommandTester($command);

    $commandTester->execute(array(
        'command' => 'acme:emails'
    ));

    $display = $commandTester->getDisplay();

    $this->assertContains('foo', $display);

    // http://ift.tt/1silozM
    $mailCollector = $client->getProfile()->getCollector('swiftmailer');

    // Check that an email was sent
    $this->assertEquals(1, $mailCollector->getMessageCount());

    $collectedMessages = $mailCollector->getMessages();
    $message = $collectedMessages[0];

    // Asserting email data
    $this->assertInstanceOf('Swift_Message', $message);
    $this->assertEquals(
        'You should see me from the profiler!',
        $message->getBody()
    );
}

It returns this error:

Argument 1 passed to Symfony\Component\HttpKernel\Profiler\Profiler::loadProfileFromResponse() must be an instance of Symfony\Component\HttpFoundation\Response, null given, called in .../vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Client.php on line 72 and defined .../vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Profiler/Profiler.php:81 .../vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Client.php:72 .../src/ACME/MyBundle/Tests/Command/ResumeEmailsTest.php:94

The error seems logical because there is no response since there's no request.

I use Symfony 2.3.30.
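
An alternative I am considering, sketched below (my assumption being that the mailer service in the container is the one the command uses): Swiftmailer ships a Swift_Plugins_MessageLogger plugin that records every sent message, with no profiler and therefore no Request/Response required:

$mailer = $this->getContainer()->get('mailer');
$logger = new \Swift_Plugins_MessageLogger();
$mailer->registerPlugin($logger);

// ... execute the command and flush the spool as above ...

$this->assertCount(1, $logger->getMessages());
$this->assertContains(
    'You should see me from the profiler!',
    $logger->getMessages()[0]->getBody()
);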

How to click OK in alert box using protractor

I am using AngularJS and I want to delete a link, in such cases, an alert box appears to confirm the delete.

I am trying to do an e2e test using Protractor; how do I confirm the alert box?

I tried:

browser.switchTo().alert().accept()

but it doesn't seem to work.

Is there a provision in protractor for handling alert boxes?
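
A variant I still intend to try, waiting for the alert before accepting it (ExpectedConditions.alertIsPresent is part of Protractor's API as far as I know):

var EC = protractor.ExpectedConditions;
browser.wait(EC.alertIsPresent(), 5000);   // give the confirm dialog time to appear
browser.switchTo().alert().accept();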

nose2: run tests from an imported module

I generated and imported a module that contains a test that I want to run with nose2. Here is the code that creates and imports the module:

import sys
import imp
import nose2


def import_code(code, name):
    module = imp.new_module(name)
    exec code in module.__dict__
    sys.modules[name] = module
    return module

code_to_test = ("""
def test_foo():
    print "hello test_foo"
""")

module_to_test = import_code(code_to_test, 'moduletotest')

# now how can I tell nose2 to run the test?
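
As a fallback while I figure out nose2's entry point, a stdlib-only sketch that runs the generated test (note that unittest's loader only collects TestCase classes, so the plain function has to be wrapped):

import unittest

suite = unittest.TestSuite([
    unittest.FunctionTestCase(module_to_test.test_foo),  # wrap the bare function
])
unittest.TextTestRunner(verbosity=2).run(suite)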

Tigase load testing with Tsung. Registration conflict 409

I am testing Tigase by using Tsung.

My first test script does nothing but register users on the Tigase server. But I have a strange problem with register requests being duplicated for some user IDs.

Take a look at the tsung.dump file below.

The register request for user 43-tsung-user-2 is sent twice. The first time it succeeds; the second time Tigase returns conflict error 409, meaning the user is already registered.

NewClient:1438077277.663192:1
load:1
Send:1438077277.703507:<0.89.0>:<?xml version='1.0'?><stream:stream  id='1' to='ubuntu' xmlns='jabber:client' version='1.0' xmlns:stream='http://ift.tt/wY9ouf'>
Recv:1438077277.71206:<0.89.0>:<?xml version='1.0'?><stream:stream xmlns='jabber:client' xmlns:stream='http://ift.tt/wY9ouf' from='ubuntu' id='93e35376-1be1-413c-9285-2aa9558798d4' version='1.0' xml:lang='en'>
Recv:1438077277.717071:<0.89.0>:<stream:features><auth xmlns="http://ift.tt/11Vn6tH"/><register xmlns="http://ift.tt/wVz4qP"/><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"><mechanism>PLAIN</mechanism><mechanism>ANONYMOUS</mechanism></mechanisms><ver xmlns="urn:xmpp:features:rosterver"/><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/><compression xmlns="http://ift.tt/SqOxcK"><method>zlib</method></compression></stream:features>
Send:1438077280.718581:<0.89.0>:<iq id='2' type='set' ><query xmlns='jabber:iq:register'><username>43-tsung-user1</username><resource>tsung</resource><password>pass1</password></query></iq>
Recv:1438077280.726568:<0.89.0>:<iq xmlns="jabber:client" type="result" id="2"/>
Send:1438077282.719153:<0.89.0>:</stream:stream>
EndClient:1438077282.719198:1
load:0
NewClient:1438077293.46312:1
load:1
Send:1438077293.4815:<0.94.0>:<?xml version='1.0'?><stream:stream  id='3' to='ubuntu' xmlns='jabber:client' version='1.0' xmlns:stream='http://ift.tt/wY9ouf'>
Recv:1438077293.484589:<0.94.0>:<?xml version='1.0'?><stream:stream xmlns='jabber:client' xmlns:stream='http://ift.tt/wY9ouf' from='ubuntu' id='4edaf8c7-72a5-48a0-99dc-33e6a348b838' version='1.0' xml:lang='en'>
Recv:1438077293.488533:<0.94.0>:<stream:features><auth xmlns="http://ift.tt/11Vn6tH"/><register xmlns="http://ift.tt/wVz4qP"/><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"><mechanism>PLAIN</mechanism><mechanism>ANONYMOUS</mechanism></mechanisms><ver xmlns="urn:xmpp:features:rosterver"/><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/><compression xmlns="http://ift.tt/SqOxcK"><method>zlib</method></compression></stream:features>
Send:1438077296.490041:<0.94.0>:<iq id='4' type='set' ><query xmlns='jabber:iq:register'><username>43-tsung-user2</username><resource>tsung</resource><password>pass2</password></query></iq>
Recv:1438077296.502307:<0.94.0>:<iq xmlns="jabber:client" type="result" id="4"/>
Send:1438077298.496102:<0.94.0>:</stream:stream>
EndClient:1438077298.496152:2
load:0
NewClient:1438077303.492718:1
load:1
Send:1438077303.502446:<0.96.0>:<?xml version='1.0'?><stream:stream  id='5' to='ubuntu' xmlns='jabber:client' version='1.0' xmlns:stream='http://ift.tt/wY9ouf'>
Recv:1438077303.511868:<0.96.0>:<?xml version='1.0'?><stream:stream xmlns='jabber:client' xmlns:stream='http://ift.tt/wY9ouf' from='ubuntu' id='e3f918bc-e45d-4bb2-918d-62d85b93cec7' version='1.0' xml:lang='en'>
Recv:1438077303.515748:<0.96.0>:<stream:features><auth xmlns="http://ift.tt/11Vn6tH"/><register xmlns="http://ift.tt/wVz4qP"/><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"><mechanism>PLAIN</mechanism><mechanism>ANONYMOUS</mechanism></mechanisms><ver xmlns="urn:xmpp:features:rosterver"/><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/><compression xmlns="http://ift.tt/SqOxcK"><method>zlib</method></compression></stream:features>
Send:1438077306.517646:<0.96.0>:<iq id='6' type='set' ><query xmlns='jabber:iq:register'><username>43-tsung-user2</username><resource>tsung</resource><password>pass2</password></query></iq>
Recv:1438077306.524358:<0.96.0>:<iq xmlns="jabber:client" type="error" id="6"><query xmlns="jabber:iq:register"><username>43-tsung-user2</username><resource>tsung</resource><password>pass2</password></query><error type="cancel" code="409"><conflict xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/><text xml:lang="en" xmlns="urn:ietf:params:xml:ns:xmpp-stanzas">Unsuccessful registration attempt</text></error></iq>
Send:1438077308.520858:<0.96.0>:</stream:stream>
EndClient:1438077308.52091:3
load:0

My test should take 30 seconds, with users arriving at 10-second intervals; therefore 3 users should be created in the database. After the test finishes I can only see 2 users, which is also what tsung.dump is already saying.

Why is Tsung repeating the request for some users? Tsung behaves similarly no matter what the load is; if I raise the load numbers I see the same behavior. Most of the time the number of successfully registered users ends up around half of the Tsung-generated user count.

Below is my tsung.xml

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/local/Cellar/tsung/1.5.1/share/tsung/tsung-1.0.dtd">
<tsung loglevel="debug" version="1.0" dumptraffic="true">
      <clients>
          <client host="localhost" use_controller_vm="true" maxusers="100000"></client>
      </clients>
      <servers>
          <server host="192.168.100.133" port="5222" type="tcp" weight="1"></server>
      </servers>
    <load>
        <arrivalphase phase="1" duration="30" unit="second">
            <users maxnumber="100000" interarrival="10" unit="second"></users>
        </arrivalphase>
    </load>
    <options>
       <option type="ts_jabber" name="global_number" value="100000"></option>
       <option type="ts_jabber" name="userid_max" value="100000" />
       <option type="ts_jabber" name="domain" value="ubuntu"></option>
       <option type="ts_jabber" name="username" value="43-tsung-user"></option>
       <option type="ts_jabber" name="passwd" value="pass"></option>
    </options>
    <sessions>
      <session probability="100" name="jabber-example" type="ts_jabber">

        <request>
          <jabber type="connect" ack="local"></jabber>
        </request>

        <thinktime value="3" random="false"></thinktime>

        <request>
          <jabber type="register" ack="no_ack" id="new"></jabber>
        </request>

        <thinktime value="2" random="false"></thinktime>

        <request>
          <jabber type="close" ack="no_ack"></jabber>
        </request>

      </session>
    </sessions>
    </tsung>

Python: How to check that the del instruction was called?

I'd like to test this function:

#foo_module.py
def foo(*args, **kwargs):
   bar = SomeClass(*args, **kwargs)
   # actions with bar
   del bar

I chose the mock library for testing. My test looks like:

@mock.patch('path.to.foo_module.SomeClass')
def test_foo(self, mock_class):
    foo()
    mock_class.assert_called_once_with()

But how can I check that 'del bar' was executed? Calling mock_class.return_value.__del__ raises AttributeError.
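
A workaround I have been sketching, with an honest caveat: this verifies that the object was finalized by the time foo() returned, which CPython's reference counting would also do at function exit, so it does not prove the del statement specifically:

# substitute a class whose __del__ records that it ran
deleted = []

class FakeSomeClass(object):
    def __init__(self, *args, **kwargs):
        pass
    def __del__(self):
        deleted.append(True)

@mock.patch('path.to.foo_module.SomeClass', FakeSomeClass)
def test_foo_finalizes_bar(self):
    foo()
    self.assertTrue(deleted)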

Testing SMTP performance for bulk emails

I'm tasked with managing and improving email marketing for my new employer. They regularly send to a genuine double-opt-in list of over 100K recipients. They are currently using IIS SMTP for this - and it works really well - but there is no DKIM capability without paying for a commercial plugin or external service.

I recently set up hMail server on an alternative port and switched the marketing software over to it for a test delivery with DKIM signing. Technically all went well, but the delivery took several hours compared to several minutes using IIS. I tried various configurations of thread/session counts during the delivery, restarting the server and resuming the queue each time to ensure the settings were applied, but this only marginally improved the speed. We send the campaigns at specific times for the best open rates, so waiting this long for a delivery to complete is not an option.

I'd like to be able to repeatedly test the delivery speed of hMail (and other servers) without damaging/wasting a valuable campaign. Is there any way I can have hMail genuinely send thousands of emails in its own time, but somehow catch them so they are not delivered to the actual recipients - some kind of 'black-hole' method or application?
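
A sketch of the black-hole idea I am imagining (Python 2's stdlib smtpd module; hMail would be pointed at this host/port as its smarthost during test runs): an SMTP server that accepts every message and silently discards it:

import asyncore
import smtpd

class BlackHoleSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        pass  # accept the message, then drop it

BlackHoleSMTPServer(('0.0.0.0', 2525), None)
asyncore.loop()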

Thanks in advance