Wednesday, December 31, 2014

Testing data in CSV files

I have a huge CSV file whose data I want to test manually before importing it with an SSIS package.


How can I manually test the data and the number of columns in a CSV file?


Any suggestions would be appreciated.
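

A minimal sketch of a scripted pre-check (in Node, since the question names no language; the filename, delimiter, and expected column count are assumptions, and the naive split will break on quoted commas):


var fs = require('fs');

var EXPECTED_COLUMNS = 12; // assumption: set this to your file's real width
var lines = fs.readFileSync('data.csv', 'utf8').split(/\r?\n/);

lines.forEach(function(line, i) {
    if (line === '') return; // skip trailing blank line
    var columns = line.split(','); // naive split: ignores quoting
    if (columns.length !== EXPECTED_COLUMNS) {
        console.log('line ' + (i + 1) + ': expected ' + EXPECTED_COLUMNS +
            ' columns, got ' + columns.length);
    }
});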


TestNG: Is there any value in having a test that has only the 'alwaysRun' attribute?

In other words, is alwaysRun of any use when it is not combined with dependsOnGroups or dependsOnMethods?


How to test Excel formulas with Ruby?

I am trying to test an xlsx file which has a lot of formulas. I tried roo's set function, but it only changes the current cell's value and doesn't re-evaluate the formulas in other cells that depend on the modified cell. Is there any way I can modify a cell and see its effect on other cells in Ruby?


I tried other gems, but they can only read or create a new Excel file, not modify an existing one. Does anybody know of anything in Ruby that can help me modify an Excel file in memory and re-evaluate the formulas, without necessarily writing to disk?


Another question is about roo: if its set function can only change a cell's value without affecting any other cells, even in memory, then what can we use it for?


Is it possible to use Protractor to test the value of a variable not model-bound to the DOM?

I'm relatively new to Angular and new (today) to Protractor, so I'm not exactly sure how to ask this question - thus I'm not quite sure whether there is a duplicate out there. Below is a very simplified version of a much larger, much more complex application we are developing, but the basic idea is the same.


Let's say I have a simple web page:



<input id="my-input" ng-model="myValue">
<button id="submit-button" ng-click="doSomething()">
    Click Me
</button>


Controlled by a simplified Angular app:



// some-angular-app.js

$scope.myValue = "";
$scope.computedValue = null;

$scope.doSomething = function() {
    $scope.computedValue = "Hello World";
};


Essentially, when you click the button, it triggers a function which manipulates variables in your app. In our case (as above), the variables (e.g. $scope.computedValue) are not bound to the DOM in any way - they are actually compiled and passed into a JSON request to be consumed by our API. However, I want to test those values -- something like:



// some-protractor-test.js

describe('form submission', function() {
    it('should correctly set the computed value', function() {
        browser.get('http://ift.tt/1ELseoZ');
        element(by.css("#my-input")).sendKeys("Hello Input");
        element(by.css("#submit-button")).click();

        // ??? how to check that computedValue === "Hello World" ???
    });
});


Is it possible to use Protractor to check the state of our data in this manner, or must all interaction with the Angular app be handled through DOM elements?
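

A sketch of one possible direction (an assumption on my part, not a confirmed Protractor recipe): since Protractor can execute script inside the page, the scope attached to an element can be read directly, provided the app runs Angular 1.x with debug info enabled:


// Reach into the running page and read computedValue off the scope
// attached to the input element.
browser.executeScript(function() {
    var el = document.getElementById('my-input');
    return angular.element(el).scope().computedValue;
}).then(function(computedValue) {
    expect(computedValue).toEqual('Hello World');
});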


Will the verbose output from logcat in Android Studio hog RAM if the device is left plugged in during testing?

My ADB testing is on a physical device; VT-x is not available on my computer.


Do I have to unplug my test device each time I test a build, or should I not worry about it?


How to determine if an <a> element is disabled

Protractor has the nifty isEnabled() function for elements. Though it works great for <button>, it isn't doing the trick for <a> elements.


I can easily check the disabled or ng-disabled attribute, but is there a cleaner way?
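

For reference, a minimal sketch of the attribute-based check (the helper is hypothetical; it leans on getAttribute('disabled') resolving to null when the attribute is absent):


// Treat an <a> as disabled when it carries a disabled attribute,
// since isEnabled() only reflects form controls.
function isAnchorEnabled(el) {
    return el.getAttribute('disabled').then(function(value) {
        return value == null;
    });
}

expect(isAnchorEnabled(element(by.css('a.my-link')))).toBe(true);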


Re-run protractor timeout or failed tests

I couldn't find any reference to the possibility of re-running failed tests with Protractor. Do you know how to do that? It would be great, because I have a lot of tests and I don't want to re-run all of them just to verify whether the previously failed tests now pass.


Does anyone have experience with this? It would look like this:



  • run all tests

  • collect the failed tests and run those again (maybe with a retry limit I can set somehow, like 2 or 3 times)

  • show the result


Ant build class files with TestNG using text files

I am using Ant to build my project and am running into a problem with tests that use text files when I run them with TestNG. Below are the two targets that compile and run my test code:



<target name="compile-tests" depends="compile"
        description="Compile the project test source-code.">

    <mkdir dir="${testBuildDir}" />

    <javac destdir="${testBuildDir}" classpathref="test.path"
           debug="${java.debug}" optimize="${java.opt}" deprecation="on">
        <src path="${testSrcDir}" />
    </javac>

    <!-- Copies text testing files over -->
    <copy todir="${testBuildDir}">
        <fileset dir="${testSrcDir}">
            <include name="**/*.txt" />
        </fileset>
    </copy>
</target>

<target name="test" depends="compile-tests"
        description="Runs all the unit tests">
    <testng suitename="boggle-tests" outputdir="${testResultsDir}">
        <classpath>
            <path refid="test.path" />
            <pathelement path="${testBuildDir}" />
        </classpath>
        <classfileset dir="${testBuildDir}" />
    </testng>
</target>


Also, I have provided below a list of the properties used in my XML file:



<property name="srcDir" location="src" />
<property name="libDir" location="lib" />

<property name="buildDir" location="build"/>
<property name="buildClassesDir" location="${buildDir}/classes"/>
<property name="javaDocDir" location="${buildDir}/javadoc"/>

<property name="testSrcDir" location="test"/>
<property name="testBuildDir" location="${buildDir}/tests"/>
<property name="testResultsDir" location="${buildDir}/results"/>

<property name="testNGFile" location="${libDir}/testng-6.8.jar" />


So both my compiled test classes and the testing text files they use end up in ${testBuildDir}. My compile-tests target simply compiles the .java files into .class files in that destination and copies over my testing .txt files.


Then my test target just runs all these .class files using the TestNG framework. The problem I am having is that when I run the test target, my test code fails, saying that it could not find the .txt files used for testing. I don't understand why this happens, since my compile-tests target clearly works as expected and copies the .txt test files into place.


I have been struggling with this for too long and could really use some help.


How do we read column headers that can be reordered in Robot Framework?

I have a rather unique problem here. I'm testing a web page using Robot Framework. There is a table in the webpage with 19 columns; columns can be hidden by selecting only the desired ones, and the order in which the columns appear can also be changed.


I'm able to read the table contents. The challenge is that I need to keep track of the order in which the columns appear, because even though a column header is displayed on the page, it isn't inside the table's header tag, which is always empty. There is a separate container which holds all the header elements. I'm not sure how they make the table headers display in the header portion of the table. Here is the sample code:



<th class="x-grid-col-resizer-actioncolumn-1150" style="width: 75px; height: 0px;"></th> // <-- Empty


<div id="actioncolumn-1150-titleEl" class="x-column-header-inner" style="height: 32px; padding-top: 8px;"><span id="actioncolumn-1150-textEl" class="x-column-header-text">Action</span><div id="actioncolumn-1150-triggerEl" class="x-column-header-trigger"></div></div> <-- one header inside the header container.


The element IDs here are dynamic. I need to keep track of the order in which the columns appear. Can anyone help me understand how this can be done?


Getting error in Behat - Call to undefined method FeatureContext::visit

Getting the following error. The feature file:


Feature: Admin
  In order to login admin

  Scenario: User fills out
    Given I am on "admin/login"
    And I fill in "username" with "sivaganesh"
    And I fill in "password" with "demodemo"
    Then I should see "Hello Admin"


The output:


Scenario: User fills out # features/admin.feature:4 PHP Fatal error: Call to undefined method FeatureContext::visit() in /home/vagrant/Sites/moore/features/bootstrap/FeatureContext.php on line 30 PHP Stack trace: PHP 1. {main}() /home/vagrant/Sites/moore/vendor/behat/behat/bin/behat:0 PHP 2. Symfony\Component\Console\Application->run() /home/vagrant/Sites/moore/vendor/behat/behat/bin/behat:31 PHP 3. Behat\Testwork\Cli\Application->doRun() /home/vagrant/Sites/moore/vendor/symfony/console/Symfony/Component/Console/Application.php:124 PHP 4. Symfony\Component\Console\Application->doRun() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Cli/Application.php:102 PHP 5. Symfony\Component\Console\Application->doRunCommand() /home/vagrant/Sites/moore/vendor/symfony/console/Symfony/Component/Console/Application.php:193 PHP 6. Symfony\Component\Console\Command\Command->run() /home/vagrant/Sites/moore/vendor/symfony/console/Symfony/Component/Console/Application.php:889 PHP 7. Behat\Testwork\Cli\Command->execute() /home/vagrant/Sites/moore/vendor/symfony/console/Symfony/Component/Console/Command/Command.php:252 PHP 8. Behat\Testwork\Tester\Cli\ExerciseController->execute() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Cli/Command.php:63 PHP 9. Behat\Testwork\Tester\Cli\ExerciseController->testSpecifications() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Tester/Cli/ExerciseController.php:108 PHP 10. Behat\Testwork\EventDispatcher\Tester\EventDispatchingExercise->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Tester/Cli/ExerciseController.php:146 PHP 11. Behat\Testwork\Tester\Runtime\RuntimeExercise->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/EventDispatcher/Tester/EventDispatchingExercise.php:70 PHP 12. Behat\Testwork\EventDispatcher\Tester\EventDispatchingSuiteTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Tester/Runtime/RuntimeExercise.php:71 PHP 13. Behat\Testwork\Hook\Tester\HookableSuiteTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/EventDispatcher/Tester/EventDispatchingSuiteTester.php:72 PHP 14. Behat\Testwork\Tester\Runtime\RuntimeSuiteTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Hook/Tester/HookableSuiteTester.php:73 PHP 15. Behat\Behat\EventDispatcher\Tester\EventDispatchingFeatureTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Tester/Runtime/RuntimeSuiteTester.php:63 PHP 16. Behat\Behat\Hook\Tester\HookableFeatureTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/EventDispatcher/Tester/EventDispatchingFeatureTester.php:71 PHP 17. Behat\Behat\Tester\Runtime\RuntimeFeatureTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Hook/Tester/HookableFeatureTester.php:72 PHP 18. Behat\Behat\EventDispatcher\Tester\EventDispatchingScenarioTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Tester/Runtime/RuntimeFeatureTester.php:83 PHP 19. Behat\Behat\Hook\Tester\HookableScenarioTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/EventDispatcher/Tester/EventDispatchingScenarioTester.php:103 PHP 20. Behat\Behat\Tester\Runtime\RuntimeScenarioTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Hook/Tester/HookableScenarioTester.php:74 PHP 21. 
Behat\Behat\Tester\StepContainerTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Tester/Runtime/RuntimeScenarioTester.php:76 PHP 22. Behat\Behat\EventDispatcher\Tester\EventDispatchingStepTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Tester/StepContainerTester.php:59 PHP 23. Behat\Behat\Hook\Tester\HookableStepTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/EventDispatcher/Tester/EventDispatchingStepTester.php:73 PHP 24. Behat\Behat\Tester\Runtime\RuntimeStepTester->test() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Hook/Tester/HookableStepTester.php:74 PHP 25. Behat\Behat\Tester\Runtime\RuntimeStepTester->testDefinition() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Tester/Runtime/RuntimeStepTester.php:73 PHP 26. Behat\Testwork\Call\CallCenter->makeCall() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Behat/Tester/Runtime/RuntimeStepTester.php:125 PHP 27. Behat\Testwork\Call\CallCenter->handleCall() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/CallCenter.php:80 PHP 28. Behat\Testwork\Call\Handler\RuntimeCallHandler->handleCall() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/CallCenter.php:125 PHP 29. Behat\Testwork\Call\Handler\RuntimeCallHandler->executeCall() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/Handler/RuntimeCallHandler.php:54 PHP 30. call_user_func_array:{/home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/Handler/RuntimeCallHandler.php:99}() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/Handler/RuntimeCallHandler.php:99 PHP 31. FeatureContext->iAmOn() /home/vagrant/Sites/moore/vendor/behat/behat/src/Behat/Testwork/Call/Handler/RuntimeCallHandler.php:99


Fatal error: Call to undefined method FeatureContext::visit() in /home/vagrant/Sites/moore/features/bootstrap/FeatureContext.php on line 30







FeatureContext.php file:



use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\Gherkin\Node\PyStringNode;
use Behat\Gherkin\Node\TableNode;

/**
 * Defines application features from the specific context.
 */
class FeatureContext implements Context, SnippetAcceptingContext
{
    /**
     * Initializes context.
     *
     * Every scenario gets its own context instance.
     * You can also pass arbitrary arguments to the
     * context constructor through behat.yml.
     */
    public function __construct()
    {
    }



    /**
     * @Given I am on :arg1
     */
    public function iAmOn($arg1)
    {
        $this->visit('GET', '/');
        #throw new PendingException();
    }

    /**
     * @Given I fill in :arg1 with :arg2
     */
    public function iFillInWith($arg1, $arg2)
    {
        throw new PendingException();
    }

    /**
     * @Then I should see :arg1
     */
    public function iShouldSee($arg1)
    {
        throw new PendingException();
    }
}




"behat/behat": "3.0.@dev", "behat/mink": "1.6.@dev", "behat/mink-extension": "2.0.@dev", "behat/mink-goutte-driver": "1.1.@dev"


Boolean disjunction (OR) using mocha and chai

I have the following scenario in which I have to check that a URL was built correctly given some query arguments. I do not expect the system to apply a specific order in the rendered URL, so I came up with the following test case, which I expected to work:



it('test that url is built correctly', function () {
    var args = {
        arg1: 'value1',
        arg2: 'value2'
    };

    var rendered_url = render_url(args);

    expect(rendered_url).to.equal('/my/url?arg1=value1&arg2=value2')
        .or.to.equal('/my/url?arg2=value2&arg1=value1');
});


I was pretty surprised that this .or chain does not exist, since it would keep the assertion construction tidy and readable.


I know I can work around this in many ways (for example, using satisfy - see the sketch after this list), but I wonder:



  • Whether I simply cannot find the pattern that achieves what I want in the documentation (I have read it thoroughly)...

  • ... or whether there exists a good reason to not include this construction in chai...

  • ... and whether there exists any other library or script that provides that feature.
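

For reference, a minimal sketch of the satisfy workaround mentioned above; satisfy accepts an arbitrary predicate, so the OR can live inside it:


// The disjunction moves into the predicate instead of the assertion chain.
expect(rendered_url).to.satisfy(function(url) {
    return url === '/my/url?arg1=value1&arg2=value2' ||
           url === '/my/url?arg2=value2&arg1=value1';
});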


How to test data of an XML file

How to test the data appearing in an XML file?


Can someone please guide me on how I should write test cases for a complex XML file?


I have to test an XML file which has various child and sub-child elements.


There are many elements which have required attributes and optional attributes.


How should I test that the data generated by the application is correct?


Any suggestions would be appreciated.


Testing unit tests' helper methods

As I am writing tests, some of them have a lot of logic in them. Most of this logic could easily be unit tested, which would provide a higher level of trust in the tests.


I can see a way to do this, which would be to create a class TestHelpers, to put in /classes, and write tests for TestHelpers along with the regular tests.


I could not find any opinion on such a practice on the web, probably because the keywords for the problem are tricky ("tests for tests").


I am wondering whether this sounds like good practice, whether people have already done this, whether there is any advice on that, whether this points to bad design, or something of the sort.


I am running into this while doing characterization tests. I know there are some frameworks for it, but I am writing it on my own, because it's not that complicated, and it gives me more clarity. Also, I can imagine that one can easily run into the same issue with unit tests.


To give an example, at some point I am testing a function that connects to Twitter's API and retrieves some data. In order to test that the data is correct, I need to check whether it's a JSON-encoded string, whether the structure matches Twitter's data structure, whether each value has the correct type, etc. The function that runs all these checks on the retrieved data would typically be interesting to test on its own.


Any ideas or opinions on this practice?


PHPUnit test classes with camel case or underscore

When writing test cases in the xUnit style that PHPUnit follows, it seems that everyone uses the camel-case convention for function names:



public function testObjectHasFluentInterface()
{
    // ...
}


I have been naming my methods with a more "eloquent" PHPSpec style:



public function test_it_has_a_fluent_interface()
{
    // ...
}


Will this style create problems for me in the future? Personally I find it vastly more readable and easy to come back to.


Mocking a function call using Mockito throw up InvalidUseOfMatchersException

I am trying to use Mockito to mock a function call.


I have a method runQueryForDataWindow in a class called QueryBuilder. The method takes two arguments: 1) a String, and 2) an instance of the class FetchWindow.



runQueryForDataWindow(String str, FetchWindow fetchWindow)


Here is what my test case for the mocking looks like:



final QueryBuilder queryBuilder = mock(QueryBuilder.class);

Mockito.when(queryBuilder.runQueryForDataWindow(anyString(),
any(FetchWindow.class))).thenReturn(queryResult);


I want to return queryResult irrespective of the function arguments.


When I run this, the test fails with org.mockito.exceptions.misusing.InvalidUseOfMatchersException.


I guess there is something wrong with the way I am trying to pass in the instance of FetchWindow. I'd appreciate any leads here.


Test Automation tool for Metro-Style JavaScript Application

I want to automate tests of a Metro-style JavaScript-based application on a Windows tablet.


But I cannot find any automation tool for Windows Metro-style JavaScript applications.


Are there any automation tools available for Metro-style JavaScript-based applications? Or is there any other way to automate tests for this kind of application?


In particular, the testing types are regression and functional testing.


NIST randomness tests in R

I've recently learned about the NIST suite of 15 randomness tests. Since the NIST test packages are written in C, I just want to know: are there packages implementing the 15 NIST tests written in R? Or how can I convert the C packages for use in R?


Tuesday, December 30, 2014

Qunit beforeEach, afterEach - async

Since start() and stop() will be removed in QUnit 2.0, what is the alternative for async setups and teardowns via the beforeEach and afterEach methods? For instance, what if I want beforeEach to wait for a promise to be resolved?
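

A minimal sketch of what the replacement might look like with assert.async(), which QUnit introduced as the successor to start()/stop(); doAsyncSetup is a placeholder for your own setup promise:


QUnit.module('my module', {
    beforeEach: function(assert) {
        // assert.async() replaces stop(); invoking done() replaces start()
        var done = assert.async();
        doAsyncSetup().then(done);
    }
});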


Converting practice project to use data driven principles, any other suggestions

So I used JUnit, Selenium, and Cucumber to make a basic web-app test, and I had some questions about how I might improve it. This is just a personal practice project. Here is what I have so far:


CucumberRunner class:



package cucumber;

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(
        format = {"pretty", "json:target/"},
        features = {"src/cucumber/"}
)
public class CucumberRunner {

}


Feature File:



Feature: To test budget calculator form works when there are no errors

  Scenario: Check that form is validated when there are no errors
    Given I am on budget calculator site
    When I populate the budget fields
    And click the calculate button
    Then print the calculated total
    And close the browser


And my step definitions:



package cucumber.features;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import cucumber.api.java.en.And;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class StepDefinitions {

    WebDriver driver = null;

    @Given("^I am on budget calculator site$")
    public void shouldNavigateToBudgetCalculatorSite() throws Throwable {
        try {
            driver = new FirefoxDriver();
            driver.navigate().to("http://frugalliving.about.com/library/Budget_Calculator/"
                    + "bl_budgetcalculator.htm");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    @When("^I populate the budget fields$")
    public void shouldPopulateBudgetFields() throws Throwable {
        try {
            driver.findElement(By.name("in1")).sendKeys("3500.00");
            driver.findElement(By.name("in2")).sendKeys("1000.00");
            driver.findElement(By.name("ex01")).sendKeys("800.00");
            driver.findElement(By.name("ex02")).sendKeys("400.00");
            driver.findElement(By.name("ex03")).sendKeys("50.00");
            driver.findElement(By.name("ex04")).sendKeys("0.00");
            driver.findElement(By.name("ex05")).sendKeys("80.00");
            driver.findElement(By.name("ex06")).sendKeys("150.00");
            driver.findElement(By.name("ex07")).sendKeys("200.00");
            driver.findElement(By.name("ex08")).sendKeys("0.00");
            driver.findElement(By.name("ex09")).sendKeys("50.00");
            driver.findElement(By.name("ex10")).sendKeys("80.00");
            driver.findElement(By.name("ex11")).sendKeys("0.00");
            driver.findElement(By.name("ex12")).sendKeys("40.00");
            driver.findElement(By.name("ex13")).sendKeys("0.00");
            driver.findElement(By.name("ex14")).sendKeys("0.00");
            driver.findElement(By.name("ex15")).sendKeys("0.00");
            driver.findElement(By.name("ex16")).sendKeys("0.00");
            driver.findElement(By.name("ex17")).sendKeys("0.00");
            driver.findElement(By.name("ex18")).sendKeys("200.00");
            driver.findElement(By.name("ex19")).sendKeys("50.00");
            driver.findElement(By.name("ex20")).sendKeys("0.00");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    @And("^click the calculate button$")
    public void shouldClickTheCalculateButton() throws Throwable {
        try {
            driver.findElement(By.cssSelector("input[value=\"Calculate\"]")).click();
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    @Then("^print the calculated total$")
    public void print_the_calculated_total() throws Throwable {
        try {
            String total = driver.findElement(By.name("res")).getAttribute("value");
            int value = Integer.parseInt(total);
            System.out.println("The calculated total is: " + value);
            System.out.println(" ");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    @And("^close the browser$")
    public void shouldCloseTheBrowser() throws Throwable {
        try {
            Thread.sleep(2000);
            driver.quit();
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

}


(If you try to run any of that, you'd obviously need the jars for Cucumber, Selenium, and JUnit, plus Firefox installed.) Okay, so a couple of questions:




  1. When I am populating the fields, instead of hardcoding one value in each, I'd like to set it up to be data-driven through Excel or MySQL (what word would I use to describe that? Would "robust" be accurate there? Or maybe "versatile"?).




  2. Also, when populating the fields, that code looks cumbersome, and I'm trying to figure out how I could use a loop to populate each field, and what type of loop it would be.




  3. Being new to automation (QA Engineer Level I intern) and to testing in general: is there another/better method or tool to use if I want to stick with Java for automation? My job had me using QTP and VBScript for a little while to introduce this to me, which was really powerful, but I'd rather stick with Java, because Java is where I want to focus (and they're giving me the option, so it's not like I'm being insubordinate).




Thanks in advance!


EDIT:


If you do decide to try and run this, these are the jars in the build path:



cucumber-core-1.1.5.jar
cucumber-html-0.2.3.jar
cucumber-java-1.1.5.jar
cucumber-junit-1.1.5.jar
cucumber-jvm-dep-1.0.3.jar
gherkin-2.12.1.jar
hamcrest-all-1.3.jar
junit-4.11.jar
selenium-server-standalone-2.44.0.jar


As an aside, everything currently works perfectly, so just to clarify: I don't have errors or anything like that. I'm mostly looking for tools or practices that I may be missing or not aware of in general that I could improve this with. And again, thanks in advance!


python unittest: where does the "X tests run" number come from?


class Tests(unittest.TestCase):

    def test_one(self):
        self.assertEqual(a, 1)

    def test_two(self):
        self.assertEqual(b, 2)
        assert c == 3
        assert d == 4

    def test_three(self):
        self.assertEqual(e, 5)
        assert f == 6


I am getting "2 tests run", whereas I clearly have three test functions and six asserts. Are my asserts not getting tested?


Gradle dependency for tests only

I have two Java projects, A and B.


For compilation there is no dependency from A to B.


For testing, the tests of A have a dependency on B.


Adding this to the build.gradle of project A did not solve it:




dependencies {
    ...
    testCompile project(':B')
}


Note: what I am trying to accomplish here is to write a test for a class in A (say ClassA) that requires an interface IX. I want to test ClassA with an implementation from project B (say ClassB implements IX), so this is actually a kind of integration test.


I would appreciate any pointers on testing two (or more) projects that have no compile-time dependency but a probable runtime dependency (for Java, Eclipse, Gradle).


relation "branches" does not exist pgbench-tools and postgresql database scale returned ""

I've installed PostgreSQL 9.1 on Ubuntu 12.04 with pgpool-II 3.3.3 and pgPoolAdmin.


I'm trying to run a test with pgbench-tools to measure the performance of PostgreSQL.


So I moved to the directory where pgbench-tools lives and edited the config file.


I tried to execute this command:


sudo -u postgres ./runset


After this, a message appears: "Removing old pgbench tables".


The first error message (which seems not to be important) is: ERROR: table "accounts" does not exist


After this, a message appears: "VACUUM creating new pgbench tables".


After this:


creating tables
10000 tuples done
20000 tuples done
...
100000 tuples done
...
vacuum...done.
Run set #1 of 2 with 2 clients scale=1
Running tests using: psql -h localhost -U postgres -p 5432 pgbench
Storing results using: psql -h localhost -U postgres -p 5432 pgbench


And after this comes "the crash":


ERROR: relation "branches" does not exist
LINE 1: select count(*) for branches
ERROR: Attempt to determine database scale returned "", aborting


It may be a silly issue; I haven't been able to solve it, as I don't have a high level of knowledge of these systems.


Any idea about what to try?


Is it possible to view/change/test against the variables in a python script that's being run from another script?

I have the following code as part of a 'whole application' test framework (as opposed to, say, Unit testing):



with open("test.py") as f:
    # Open, read, compile, execute the code
    code = compile(f.read(), "test.py", 'exec')
    exec(code)


Is there any way for me, the programmer in the file calling that code, to access the variables within that code?


Protractor click link and compare server response with file

I'm building a system where users upload files. I've managed to test the file uploading with Protractor, but now I need to verify that when requesting the raw file from the server the response is the same as the uploaded file.


A user downloads the file by clicking a link, which triggers a normal GET request for the file. Since I'm dealing with plain text files, the downloaded file is not served as an attachment but instead displayed in the browser. The relevant PHP code is:



header('Content-Type: text/plain');
header('Content-Length: ' . $file->getSize());
header('Content-Disposition: filename="file.txt"');

$file->openFile('rb')->fpassthru();


I've got two problems:



  1. How do I make protractor wait until the entire file has been downloaded?

  2. How do I compare the downloaded file to what I uploaded?


Here's what I've got so far:



var path = require('path');

function uploadFile(filename) {
    var absolutePath = path.resolve(__dirname, filename);
    $('input#fileupload').sendKeys(absolutePath);
}

describe('Download', function() {

    it('should be exactly the same as an uploaded text file', function() {
        uploadFile('data/2.txt');

        browser.wait(function() {
            return element(by.id('download-button')).isPresent();
        });

        element(by.id('download-button')).click();

        // Wait until page is loaded
        // Compare uploaded file with downloaded file
    });
});
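

One possible way to handle the comparison (a sketch under assumptions, not a confirmed Protractor recipe): since Protractor specs run under Node, the raw file can be fetched over HTTP outside the browser and compared with the local fixture; the URL and paths here are hypothetical:


var fs = require('fs');
var http = require('http');

// Fetch the served file and compare it byte-for-byte with the uploaded fixture.
function expectFileServed(url, localPath, done) {
    http.get(url, function(res) {
        var chunks = [];
        res.on('data', function(c) { chunks.push(c); });
        res.on('end', function() {
            var served = Buffer.concat(chunks).toString('utf8');
            var uploaded = fs.readFileSync(localPath, 'utf8');
            expect(served).toEqual(uploaded);
            done();
        });
    });
}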

Base class not finding app settings

This is somewhat of a general question, but I haven't found much by Googling it. I have a test framework that sets up an environment for testing purposes. My application consumes this framework through a reference path and runs manual tests just fine. However, once I ask the build server to run my tests, the framework complains that it cannot find any of my app settings. The app.config file sits in my application's testing project, and I am sure it exists in the correct bin folder on my build server. I'm doing this in a C# .NET environment.


casperjs test written in coffeescript hangs

Recently I tried to use CoffeeScript for CasperJS tests. The code below doesn't throw any errors, but it seems to hang every time I fire it up from the CLI.



casper = require("casper")
    .create
        verbose: true,
        logLevel: "debug"

casper_utils = require("utils")
colorizer = require("colorizer").create 'Colorizer'

common_link = "http://0.0.0.0:6543/"

landing_pages = ['g',
    'em',
    'm1',
    'm4',
    'mv4',
    'mv5',
    'mp',
    'm2',
    'm3',
    'rp',
    'rc']

reg_hash = '#reg'
reg_modal = '.registration'

pay_hash = '#pay'
pay_modal = '.payment'

checkRegVisibility = ->
    @test.assertExists reg_modal
    @test.assertVisible reg_modal
    @test.assertNotVisible pay_modal

checkPayVisibility = ->
    @test.assertExists pay_modal
    @test.assertVisible pay_modal
    @test.assertNotVisible reg_modal

casper.on 'run.start', ->
    @log 'Casper started'

casper.on 'error', ->
    @log 'error'

casper.on 'http.status.404', ->
    @log '404'

casper.test.begin 'Initial test', landing_pages.length * 3, (test) ->
    landing_pages.forEach (lp, index) ->
        casper.start common_link+"?lp="+lp, ->
            casper.echo lp
            checkRegVisibility()
        casper.then common_link+"?lp="+lp+reg_hash, ->
            casper.echo lp
            checkRegVisibility()
    casper.run ->
        test.done()

casper.exit()


Also, is it possible to use js2coffee with CasperJS tests?


Monday, December 29, 2014

add a variable to access across the solution c#

This might be a simple question, but I could not find a way through it and wanted some help.


I have 3 common libraries in one solution and a testing project (module-wise, we are testing hardware here, so the modules differ from time to time, but the interaction libraries are the same). I want to add a variable which can be used across the libraries and this testing project.


Example :


I have:


Testing Project -> Library1 -> Library2 -> Library3 -> Hardware


The response will be sent back from the hardware.


One of the requirements I have is to define a common variable in the solution and use it across the libraries and the testing project.


I have tried defining it in app.config, but that is limited to one library and cannot be used in the other libraries.


How do I save the web service response to the same excel sheet I extracted the data from?

For example, given the sample HP Flights SampleAppData.xls and using CreateFlightOrder, we can link the data to the test functions and get an OrderNumber and Price response from the web service. And in the SampleAppData.xls Input tab, we can see that there is an empty OrderNumber column.


So here is my question: is there any way I can take the OrderNumber response and fill in the empty column in SampleAppData.xls?


My reason for doing this: let's say I have many test cases to run, and they will take days; I run a certain test today, and I need today's result for the next day's test. I know the responses are saved in the results, but it defeats the point of automation if I am required to check the response manually for each and every test case.


Why do the Theano tests fail with many "KnownFailureTest"s?

Theano is failing its tests when I do:



python -c "import theano; theano.test();"


If these are known failures, shouldn't the suite still pass? I.e., when I test other libraries, KnownFailures sometimes trigger, but the overall test run still passes with "OK" (while still noting the KnownFails and skipped tests).


My guess is that this is OK and the tests really are "passing", but since I'm doing a fresh install following the deeplearning.net tutorials and I'm getting this output, I assume others might have this question as well; a search on Google and SO isn't really helpful.


Forgive the error dump; I am sure no one needs to read all the way through it, but it's here for reference if someone else has this question. Here are the errors at the end of the test run:



======================================================================
ERROR: test_none (theano.compile.tests.test_function_module.T_function)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/compile/tests/test_function_module.py", line 42, in test_none
raise KnownFailureTest('See #254: Using None as function output leads to [] return value')
KnownFailureTest: See #254: Using None as function output leads to [] return value

======================================================================
ERROR: test002_generator_one_scalar_output (theano.sandbox.scan_module.tests.test_scan.TestScan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/scan_module/tests/test_scan.py", line 474, in test002_generator_one_scalar_output
raise KnownFailureTest('Work-in-progress sandbox ScanOp is not fully '
KnownFailureTest: Work-in-progress sandbox ScanOp is not fully functional yet

======================================================================
ERROR: test003_one_sequence_one_output_and_weights (theano.sandbox.scan_module.tests.test_scan.TestScan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/scan_module/tests/test_scan.py", line 512, in test003_one_sequence_one_output_and_weights
raise KnownFailureTest('Work-in-progress sandbox ScanOp is not fully '
KnownFailureTest: Work-in-progress sandbox ScanOp is not fully functional yet

======================================================================
ERROR: test_alloc_inputs2 (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/tests/test_scan.py", line 2844, in test_alloc_inputs2
"This tests depends on an optimization for scan "
KnownFailureTest: This tests depends on an optimization for scan that has not been implemented yet.

======================================================================
ERROR: test_infershape_seq_shorter_nsteps (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/tests/test_scan.py", line 3040, in test_infershape_seq_shorter_nsteps
raise KnownFailureTest('This is a generic problem with infershape'
KnownFailureTest: This is a generic problem with infershape that has to be discussed and figured out

======================================================================
ERROR: test_outputs_info_not_typed (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: This test fails because not typed outputs_info are always gived the smallest dtype. There is no upcast of outputs_info in scan for now.

======================================================================
ERROR: test_arithmetic_cast (theano.tensor.tests.test_basic.test_arithmetic_cast)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_basic.py", line 5583, in test_arithmetic_cast
raise KnownFailureTest('Known issue with '
KnownFailureTest: Known issue with numpy >= 1.6.x see #761

======================================================================
ERROR: test_abs_grad (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_complex_grads (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_mul_mixed (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_mul_mixed0 (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_mul_mixed1 (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_polar_grads (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: test_gradient (theano.tensor.tests.test_fourier.TestFourier)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178

======================================================================
ERROR: theano.tensor.tests.test_opt.test_log_add
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_opt.py", line 1508, in test_log_add
raise KnownFailureTest(('log(add(exp)) is not stabilized when adding '
KnownFailureTest: log(add(exp)) is not stabilized when adding more than 2 elements, see #623

======================================================================
ERROR: Currently Theano enable the constant_folding optimization before stabilization optimization.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_opt.py", line 3068, in test_constant_get_stabilized
"Theano optimizes constant before stabilization. "
KnownFailureTest: Theano optimizes constant before stabilization. This breaks stabilization optimization in some cases. See #504.

======================================================================
ERROR: test_dot (theano.tests.test_rop.test_RopLop)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 277, in test_dot
self.check_rop_lop(tensor.dot(self.x, W), self.in_shape)
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 191, in check_rop_lop
raise KnownFailureTest("Rop doesn't handle non-differentiable "
KnownFailureTest: Rop doesn't handle non-differentiable inputs correctly. Bug exposed by fixing Add.grad method.

======================================================================
ERROR: test_elemwise0 (theano.tests.test_rop.test_RopLop)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 280, in test_elemwise0
self.check_rop_lop((self.x + 1) ** 2, self.in_shape)
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 191, in check_rop_lop
raise KnownFailureTest("Rop doesn't handle non-differentiable "
KnownFailureTest: Rop doesn't handle non-differentiable inputs correctly. Bug exposed by fixing Add.grad method.

----------------------------------------------------------------------
Ran 2441 tests in 807.791s

FAILED (errors=18)


Thanks!


java.lang.AssertionError: expected

My TestNG test throws an error even though the expected value appears to match the actual value.


Here is the TestNG code:



@Test(dataProvider = "valid")
public void setUserValidTest(int userId, String firstName, String lastName) {
    User newUser = new User();
    newUser.setLastName(lastName);
    newUser.setUserId(userId);
    newUser.setFirstName(firstName);
    userDAO.setUser(newUser);
    Assert.assertEquals(userDAO.getUser().get(0), newUser);
}


The error is:



java.lang.AssertionError: expected [UserId=10, FirstName=Sam, LastName=Baxt] but found [UserId=10, FirstName=Sam, LastName=Baxt]


What have I done wrong here?


Android AVD not linking or displaying app

Hello people of Stack Overflow!


I've been trying to create my first app for a while now using Eclipse.


I understand how to create the layout in XML. I understand Java well enough to build simple Java applications and work with packages and such.


I've been following this tutorial series and have come to a point where I need to test the app: [youtube]watch?v=B-HL6QTdOXs ** sorry: not enough rep to make three links **


This is where the trouble is: testing. The emulator from the AVD doesn't show my app's icon anywhere in the menu. I have also tried Genymotion, and the same issue occurs.


As far as I know, all of the paths and URIs are correct to link everything up, but it just won't show.


The directories containing the AVD Manager and SDK Manager are both located directly on the C drive. My Eclipse workspace is under my user's folder.


When I go to run it as an Android application, the option sometimes doesn't appear, but once I clean the project, the button appears again.


Below are some screen-captures I took and uploaded to imgur. I hope somebody can help me.


~ imgur album ~


http://imgur.com/a/VAKuk


~ glewinfo paste ~ http://pastebin.com/1gUJ6xDb


Thanks for the attention! :)


I'm willing to try Android Studio as well, but I am accustomed to Eclipse from school.


Turning off autocorrect in Internet Explorer by passing a parameter in WebDriver/Protractor?

I am testing on IE11 (Windows 7) using Protractor 1.5.0, and my tests are failing because the text that I post on a message forum is being autocorrected. Is there a way to turn off autocorrect by tweaking something in my config file? Such a tweak would be ideal, since I'm experiencing the same issue when I run the tests remotely on Sauce Labs.


I am not experiencing this issue on Firefox, Safari, or Google Chrome.


Example:



Expected 'sint quis impedit officiis harum cupiditate facilis maiores aliquam repellendus ex voluptatem commode voluptatibus incident dolor' to equal 'sint quis impedit officiis harum cupiditate facilis maiores aliquam repellendus ex voluptatem commodi voluptatibus incidunt dolor'.



Get all element attributes using protractor

According to the documentation, to get a single attribute by name you can use getAttribute():



var myElement = element(by.id('myId'));
expect(myElement.getAttribute('myAttr')).toEqual('myValue');


But how can I get all of the attributes that an element has?


There is no information about it at the Protractor API documentation page.
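

For what it's worth, a possible sketch (an assumption on my part, not a documented Protractor API) that pulls the whole attribute list out of the browser with executeScript:


// Read every attribute name/value pair off the underlying DOM node.
browser.executeScript(function(el) {
    var result = {};
    for (var i = 0; i < el.attributes.length; i++) {
        result[el.attributes[i].name] = el.attributes[i].value;
    }
    return result;
}, element(by.id('myId')).getWebElement()).then(function(attrs) {
    console.log(attrs); // e.g. { id: 'myId', myAttr: 'myValue', ... }
});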


RhinoMocks in Machine.Specifications: expectation violation, expected 1, got 0

I have a problem with mocking my interface. I want to check that a method of my interface is called. My interface/class looks like this:



interface IMyInterface
{
    IMyInterface Method(string lel);
}

class MyClass : IMyInterface
{
    public IMyInterface Method(string lel)
    {
        // do something
    }
}

class AnotherClass
{
    private IMyInterface _instance;

    public void AnotherMethod()
    {
        // do something with this instance of IMyInterface
    }
}


and my test class looks like this:



[Subject(typeof(AnotherClass))]
abstract class AnotherClassTest : AnotherClass
{
    protected static IMyInterface MyInterface;

    Establish context = () =>
    {
        MyInterface = fake.an<IMyInterface>(); // MockRepository.GenerateStrictMock<IMyInterface>() also doesn't work properly
        MyInterface.Stub(x => x.Method("lel")).IgnoreArguments().Return(MyInterface);
    };
}

[Subject(typeof(AnotherClass))]
class When_cos_tam_cos_tam : AnotherClassTest
{
    Establish context = () =>
    {
        //MyInterface.Stub(x => x.Method("lel")).IgnoreArguments().Return(MyInterface);
    };

    Because of = () => sut.AnotherMethod();

    It Should_cos_tam = () => MyInterface.AssertWasCalled(x => x.Method("lel"));
}


And I'm getting the following error:



Rhino.Mocks.Exceptions.ExpectationViolationException occurred in Rhino.Mocks.dll
IMyInterface.Method("lel"); Expected #1, Actual #0.

Java Swing FEST testing without source code

I'm looking for any possible method to automate testing of Java Swing applications whose source code is not available. I have tried several tools (JUnit, FEST, etc.), but they all require either source code (for internal tests) or the main class of the application to be tested. Any alternative software, or any tool that can determine the components and component properties of the application (with the source code not being a prerequisite), would be useful. Thanks.


Java uses more memory than anticipated

OK, so I'm trying this little experiment in Java: I want to fill up a queue with integers and see how long it takes. Here goes:



import java.io.*;
import java.util.*;

class javaQueueTest {
    public static void main(String args[]) {
        System.out.println("Hello World!");
        long startTime = System.currentTimeMillis();
        int i;
        int N = 50000000;

        ArrayDeque<Integer> Q = new ArrayDeque<Integer>(N);
        for (i = 0; i < N; i = i + 1) {
            Q.add(i);
        }
        long endTime = System.currentTimeMillis();
        long totalTime = endTime - startTime;
        System.out.println(totalTime);
    }
}


OK, so I run this and get:



Hello World!
12396


About 12 seconds - not bad for 50 million integers. But if I try to run it for 70 million integers, I get:



Hello World!
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.Integer.valueOf(Integer.java:642)
at javaQueueTest.main(javaQueueTest.java:14)


I also notice that it takes about 10 minutes to come up with this message. Hmm, so what if I give almost all my memory (8 GB) to the heap? So I run it with a heap size of 7 GB, but I still get the same error:



javac javaQueueTest.java
java -cp . javaQueueTest -Xmx7g
Hello World!
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.Integer.valueOf(Integer.java:642)
at javaQueueTest.main(javaQueueTest.java:14)


I want to ask two things. First, why does it take so long to come up with the error? Second, why is all this memory not enough? If I run the same experiment for 300 million integers in C (with the glib GQueue) it runs (and in 10 seconds, no less, although it slows down the computer a lot), so the number of integers must not be at fault here. For the record, here is the C code:



#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <glib.h>
#include <time.h>

int main() {
    clock_t begin, end;
    double time_spent;
    GQueue *Q;

    begin = clock();
    Q = g_queue_new();
    g_queue_init(Q);
    int N = 300000000;
    int i;
    for (i = 0; i < N; i = i + 1) {
        g_queue_push_tail(Q, GINT_TO_POINTER(i));
    }
    end = clock();
    time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("elapsed time: %f \n", time_spent);

}


I compile and get the result:



gcc cQueueTest.c `pkg-config --cflags --libs glib-2.0 gsl ` -o cQueueTest
~/Desktop/Software Development/Tests $ ./cQueueTest
elapsed time: 13.340000

Is it wise to verify runtime complexity of code with timer tests?

Is it wise to verify runtime complexity of code with timer tests?


For example:



x=very large input;

timer start;
foo(x);
timer end;

print time;


So if the time is near 0 seconds, does that mean foo runs in O(n) or less, and if the timer says 30-60 seconds, does that mean the runtime has to be larger than O(n)?


In general, if a function takes more time, does that mean its runtime complexity is larger?
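

One way to make a timer test more informative (a sketch of the general idea; foo and makeInput are placeholders for your own function and input generator) is to compare runtimes across growing input sizes rather than reading a single absolute number:


function timeIt(fn, input) {
    var start = Date.now();
    fn(input);
    return Date.now() - start;
}

// If the time roughly doubles when n doubles, growth looks linear;
// if it roughly quadruples, growth looks quadratic, and so on.
[100000, 200000, 400000].forEach(function(n) {
    console.log(n + ': ' + timeIt(foo, makeInput(n)) + ' ms');
});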


Is there a python-based utility for testing examples in comments

Is there a python-based utility for testing examples in function comments? For example, given the following code in orwell_1984.py:



def two_plus_two():
    """Returns 2 + 2 according to Orwell's 1984"""
    # EX: two_plus_two() => 5
    return 5


The utility would do the equivalent of the following:



import orwell_1984
verify_test(orwell_1984.two_plus_two, (), 5)


where verify_test invokes the passed-in function with the specified parameters, and then makes sure the result is equal to the expected value.


A while back I wrote something to do this in perl; see http://www.cs.nmsu.edu/~tomohara/useful-scripts/test-perl-examples.perl. Before trying to port that, I am hoping to find a python-based utility that does something similar.
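
For reference, the standard library's doctest module does something very close, though it expects the example in interpreter format inside the docstring rather than in an # EX: comment; a sketch of the same function in that style:

def two_plus_two():
    """Returns 2 + 2 according to Orwell's 1984.

    >>> two_plus_two()
    5
    """
    return 5

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # verifies every >>> example in this module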


dimanche 28 décembre 2014

Verifying the search results populated in a table

There is a tab to search among the displayed list of vehicles and their details in the webpage. Here, for example, I am searching for phone numbers containing 407, and it returns the phone numbers that contain 407. I am able to do this through Robot Framework. But the challenge is that I should verify the returned results, which are populated in a table.


Table code looks like this



<tr class="x-grid-row">
    <td class=" x-grid-cell x-grid-cell-LandmarkEditIcon x-action-col-cell x-grid-cell-first">
        <div class="x-grid-cell-inner " style="text-align: left; ;">
            <img alt="" src="/assets/ui/images/menu/edit-menu-item.png" class="x-action-col-icon x-action-col-0 small-icon-image" data-qtip="Edit STEIN BEER GARDEN new">
        </div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1225 " data-qtip=" Double click to view the vehicles present. ">
        <div class="x-grid-cell-inner " style="text-align: left; ;">STEIN BEER GARDEN new</div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1226 ">
        <div class="x-grid-cell-inner " style="text-align: left; ;">test reg 145</div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1227 ">
        <div class="x-grid-cell-inner " style="text-align: left; ;">0</div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1228 ">
        <div class="x-grid-cell-inner " style="text-align: left; ;">895 Villa Street, Mountain View, CA, 94041</div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1229 ">
        <div class="x-grid-cell-inner " style="text-align: left; ;">circle</div>
    </td>
    <td class=" x-grid-cell x-grid-cell-gridcolumn-1230 x-grid-cell-last">
        <div class="x-grid-cell-inner " style="text-align: left; ;">Phone Number</div>
    </td>
</tr>


I just pasted one row here; the table structure looks like the row above. I need to verify whether the phone numbers match the search criteria. I'm new to Robot Framework. Any help would be appreciated. Thanks in advance.


Note : Id's are dynamic
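
As a sketch of the verification step (assuming Selenium2Library, and assuming the phone-number column is the fifth td; the column index and the 407 criterion should be adjusted to the real grid), since the IDs are dynamic the locator leans on the grid-row class:

*** Settings ***
Library    Selenium2Library

*** Test Cases ***
Search Results Should Match Criteria
    ${count}=    Get Matching Xpath Count    //tr[contains(@class,'x-grid-row')]/td[5]/div
    :FOR    ${i}    IN RANGE    1    ${count} + 1
    \    ${text}=    Get Text    xpath=(//tr[contains(@class,'x-grid-row')]/td[5]/div)[${i}]
    \    Should Contain    ${text}    407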


How to test the Fixture class explicitly?

I was trying to ensure the fixture resets the state, but now I am asked to test the fixture class explicitly, and I am not sure what that means when all the strategy below does is reset the state.



import sys
import testtools
import fixtures

class MyTestExample_Fixture(fixtures.Fixture):
    """docstring for MyTestExample_Fixture"""
    def setUp(self):
        super(MyTestExample_Fixture, self).setUp()
        print >> sys.stderr, "ran Fixture"
        print >> sys.stderr, "state Reset"


class MyTestExample(testtools.TestCase):
    """docstring for MyTestExample"""

    def setUp(self):
        super(MyTestExample, self).setUp()
        self.useFixture(MyTestExample_Fixture())
        print >> sys.stderr, "setUp"

    def test_1(self):
        print >> sys.stderr, "test_1"

    def test_2(self):
        print >> sys.stderr, "test_2"

$python -m nose.core testExample.py
### the result:

ran Fixture
state Reset
setUp
test_1
.ran Fixture
state Reset
setUp
test_2
.
----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
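
One reading of "test the fixture class explicitly" is to give the fixture its own test case, separate from the tests that merely use it; a sketch (the asserted attribute is hypothetical, since this fixture only prints):

import testtools

class MyFixtureTest(testtools.TestCase):
    """Exercise the fixture itself rather than code that uses it."""

    def test_setup_resets_state(self):
        fixture = MyTestExample_Fixture()
        # useFixture calls setUp() and schedules cleanUp automatically
        self.useFixture(fixture)
        # assert on whatever the fixture is supposed to reset, e.g.:
        # self.assertEqual(fixture.state, "reset")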

Selenium server standalone for Internet Explorer

I'm using Selenium Server standalone to run some Selenium tests on Internet Explorer, but it opens an embedded Internet Explorer, which has 2 drawbacks:



  1. my site renders differently (everything is good in IE but not in embedded IE)

  2. it does not use Internet Explorer's settings, so even though I've turned off script debugging in IE, I still get some errors in embedded IE.


Does anybody have an idea how I can make the Selenium standalone server work with IE, not embedded IE? I use the latest version of both server and driver, which is 2.44.


Thanks
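
For comparison, a minimal Java sketch of driving the real installed IE through the standalone server (assumes IEDriverServer.exe is present and registered, e.g. via -Dwebdriver.ie.driver when starting the server):

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class IeSmokeTest {
    public static void main(String[] args) throws Exception {
        // requests a real Internet Explorer session from the standalone
        // server; IEDriverServer drives the installed browser, so the
        // browser's own settings apply
        DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
        driver.get("http://example.com/");
        driver.quit();
    }
}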


Branch coverage with missing else

Given some code:



void f(int x) {
    if (x == 0) { dosomething(); }
}


If I run this with two test cases: t1 = <0> and t2 = <2>, will this provide me with 100% branch coverage even though the else statement is missing?


In other words, does the else statement need to exist to achieve 100% branch coverage?


Thanks


Robolectric test cases

Got a basic question on Robolectric testing framework.


I just managed to get the framework working in Android Studio for my test project, and now I've got a basic question. I have 2 test classes inside my test package, and every time I run the test cases using



./gradlew test


it's executing only one class, even though there are 2 test classes inside the package. I searched for some good documentation on running the test scripts but didn't find any yet. Can you please point me to some documentation, or let me know how I can test both classes in the package? This is my sample dummy test case. For testing, I kept the same piece of code in 2 different classes, but it's executing only one. I'm not sure how it picks which class to run.





@RunWith(RobolectricTestRunner.class)
public class DummyTest2 {
    @Before
    public void setup() {
        // do whatever is necessary before every test
    }

    @Test
    public void testShouldFail() {
        Assert.assertTrue(Boolean.TRUE);
    }
}

samedi 27 décembre 2014

C# asmx webservice test framework development

My development team has created service jobs (asmx web services) which get called from a UI, and each job has its own classes. I have to build a test framework that should accommodate the current jobs and also future jobs or changes to existing jobs.


I need expert advice on this (framework structure). As of now, I have added web references to the services and am calling the different classes written for the jobs, but it looks more like unit testing, which I think is not a very good place to start.


kill running process in test golang

Writing a program based on http.ListenAndServe and writing the tests, I've hit a stump. My test looks like this:



package main

import (
    "testing"

    "../"
)

func TestServer(t *testing.T) {
    Server.Server()
}


and the tested function



package Server

import (
    "net/http"
)

var (
    DefaultPort = "8080"
    DefaultDir  = "."
)

func Server() {
    port := DefaultPort
    directory := DefaultDir

    http.ListenAndServe(":"+port, http.FileServer(http.Dir(directory)))
}


My goal is to be able to run the test and, after the server listener is launched, kill it and return a success message.
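
One way to get that shape is to avoid killing a process at all and lean on net/http/httptest, which serves the same handler on a throwaway port and tears the listener down explicitly when the test ends; a sketch (it tests the FileServer handler directly rather than Server() itself):

package main

import (
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestFileServer(t *testing.T) {
    // serve the same handler Server() wires up, on a free port
    ts := httptest.NewServer(http.FileServer(http.Dir(".")))
    defer ts.Close() // the listener dies when the test returns

    resp, err := http.Get(ts.URL)
    if err != nil {
        t.Fatal(err)
    }
    defer resp.Body.Close()

    if _, err := ioutil.ReadAll(resp.Body); err != nil {
        t.Fatal(err)
    }
    if resp.StatusCode != http.StatusOK {
        t.Fatalf("got status %d, want %d", resp.StatusCode, http.StatusOK)
    }
}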


How to compare my code? (Testing)

I have a graph partitioning algorithm, and I ran some tests on the code; for example, for 100 nodes the execution time is 5.10s. So now I have a graph of node count versus execution time, but to improve the code I want something to compare against.


For example, my code takes 5.10s for 100 nodes, while another, optimal implementation takes 4.0s for 100 nodes.


Then I can say "yes, I have to improve my code; my time is not good enough".


My question is how I can do that comparison.


(If I am not clear enough, I am sorry; I will improve the question based on feedback.)


Approach to testing lib/ routines (i.e. not Models or Controllers) in Rails

I'm developing a rails application that functions as a directory/phonebook application for a small organization.


Background


The application basically consolidates information from multiple internal Web APIs and stores them in the local SQLite3 DB. The application is basically a glorified front-end that reads directly from this DB.


Every X hours, a rake task is scheduled that pulls information from the Web APIs into the DB. On the first run the DB is obviously blank, but on subsequent runs it updates existing data and creates any new records if necessary. All the logic to query the APIs and insert into the DB is in lib/update.rb.


Question


How should I go about setting up tests for the above workflow? I know rails has very nice support for fixtures. But in this case I don't want to set up pre-configured data as a fixture. I want to mock Web API calls and run it through lib/update.rb to ensure that it is getting correctly inserted with the right logic. I also want to do several runs to mock the first run and subsequent runs and assert the correct behavior. Should I be putting everything in /test/unit/update_test.rb as a unit test?


Also how do I manage data between Unit tests and Model Tests? I will definitely be using fixtures for Models, so should I clear the DB after running Unit tests?
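
One possible shape, sketched under assumptions (Update.run, the endpoint URL, and the Person model stand in for the real names in lib/update.rb): put the workflow test in test/unit/update_test.rb, fake the HTTP layer with WebMock, and rely on transactional tests so nothing leaks into the fixture-based model tests:

# test/unit/update_test.rb -- a sketch, not the real names
require 'test_helper'
require 'webmock/minitest'
require 'update'

class UpdateTest < ActiveSupport::TestCase
  # transactional tests roll the DB back after each test, so unit
  # tests and model tests will not see each other's data

  def stub_people_api
    stub_request(:get, "http://internal.example/api/people").
      to_return(body: [{ name: "Alice" }].to_json,
                headers: { "Content-Type" => "application/json" })
  end

  test "first run populates an empty table" do
    stub_people_api
    Update.run
    assert_equal 1, Person.count
  end

  test "subsequent runs update instead of duplicating" do
    stub_people_api
    2.times { Update.run }
    assert_equal 1, Person.count
  end
end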


Thanks!


Testing interfaces rather than classes in PHPSpec

When unit testing classes with PHPSpec, I want to know if it's possible to test against their interfaces rather than the concrete class implementation.


Why? Because if classes follow their interface properly, then they should all be returning the same results, no?


paypal sandbox notifications - where do buyer notifications go?

I am using the paypal sandbox to test the full process for a buy now button. Everything goes fine, except, I cannot find where confirmation emails go for the buyer. I have logged into the sandbox under the buyer email. I have logged into the developer email and checked the notifications sent to the developer for all test accounts, and notifications for the individual test accounts. The test seller account gets a notification email that there was a sale, but the test buyer account gets absolutely nothing.


Also, this is testing the path where the buyer pays with a simulated credit card number, and is not logged into paypal. (the buyer gives their email associated with their paypal test account, but this is the path where the buyer indicates they want to pay with a credit card, not their paypal account.)


Any help would be greatly appreciated.


Thanks.


How to make Selenium test run both on local and remote server with Python Django?

Many Selenium test examples begin like this:



from selenium import webdriver

driver = webdriver.Firefox()
driver.get("some hardcoded url")


Is there a way to avoid hardcoding URLs into the test and somehow retrieve the domain name from the environment the test was launched in?
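
One common pattern with Django is to subclass LiveServerTestCase, which starts a server for the test run and exposes its address as live_server_url, and to let an environment variable override it for remote runs; a sketch (TEST_BASE_URL is a made-up variable name):

import os

from django.test import LiveServerTestCase
from selenium import webdriver


class FrontPageTest(LiveServerTestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        # point TEST_BASE_URL at a remote deployment, or leave it unset
        # to hit the server Django starts for this test run
        self.base_url = os.environ.get("TEST_BASE_URL", self.live_server_url)

    def tearDown(self):
        self.driver.quit()

    def test_front_page_loads(self):
        self.driver.get(self.base_url + "/")
        self.assertIn("<html", self.driver.page_source.lower())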


vendredi 26 décembre 2014

Mocking public method of testing object

I have a class that has two public methods. It looks something like the following:



class IntRequest
{
    public function updateStatus()
    {
        $isValid = $this->isValid();

        // ... next is a complex logic that uses $isValid
    }

    /**
     * @return bool
     */
    public function isValid()
    {
        // another complex logic
    }
}


I need to test the first function, IntRequest::updateStatus; however, I need to run two tests: the first one where IntRequest::isValid returns false, and the second one where it returns true.


I tried to mock that function, but the tests run calling the actual IntRequest::isValid, not the mocked one.


My testing code is



$intRequest = new IntRequest;

$mock = m::mock($intRequest);
$mock->shouldReceive('isValid')
    ->once()
    ->andReturn(true);

$res = $mock->updateStatus();

$this->assertTrue($res);


I've tried calling $res = $intRequest->updateStatus() instead of $res = $mock->updateStatus(), but with no luck.


So, I am wondering: is it possible to mock a function that is called inside the method under test?
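
For what it's worth, Mockery distinguishes proxied partials (m::mock($instance), which cannot intercept $this-> calls made inside the object) from generated partials, where naming the method in brackets stubs only that method on a real instance of the class. A sketch of the latter:

// partial mock: only isValid() is stubbed; updateStatus() stays real,
// and its internal $this->isValid() call hits the stub
$mock = m::mock('IntRequest[isValid]');
$mock->shouldReceive('isValid')->once()->andReturn(true);

$this->assertTrue($mock->updateStatus());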


External Test Functions (Rails)

I have a method I am writing in test_helper.rb to simplify some bulk testing of HTML elements. While I am developing this function experimentally now, I would like to spin it off into its own stand-alone function. If this were application code, it would go in lib/ (I think?), but since it is test code, where would I put it? Is there a location for external test code, or would I just create a new file/directory to hold my re-usable code and require it inside test_helper.rb?


Must collisions be detected "before" or "after" moving the object?

I want to ask this question the most theoretically I can. Forget about programming languages and APIs, please.


Let's suppose that we have a square that we move with the keyboard. In the screen where our square lives, there exists an obstacle too. This obstacle is a rectangle, and it is solid. This means that if, for example, our square hits it from the left side (the left side of the obstacle), the square won't be able to advance in that direction (like a wall).


This is very simple, so now, I think there are two main ways to approach this problem. Let's suppose that we use hit-testing collision for detecting it:


- Make the move (using the time difference between frames), and see if the new move caused a collision. If it did, "correct" the coordinate that caused the hit (it could be x or y).


- See if updating x would cause a collision. If it would, don't update x (or update it to its maximum value before colliding with the obstacle). Do the same with y.


Actually, I'm using the second method, and it works perfectly! First I tried the first method, but I saw a big problem with it: when you detect a collision after moving, it means the player got "into" the obstacle. But the collision could have come from many possible directions, so how will I know in which direction to correct the coordinates? It may be doable, but it complicates the code when I can simply choose the second way.


I'm doing this in SDL, and the second way works nicely.
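
For concreteness, a small sketch in C of that second method (Rect and the overlap test are generic stand-ins, not SDL calls): each axis is tried on its own and committed only if it causes no collision:

typedef struct { int x, y, w, h; } Rect;

/* axis-aligned overlap test */
static int rects_overlap(const Rect *a, const Rect *b) {
    return a->x < b->x + b->w && b->x < a->x + a->w &&
           a->y < b->y + b->h && b->y < a->y + a->h;
}

/* try each axis separately; only commit a component that is safe */
void move_player(Rect *player, const Rect *obstacle, int dx, int dy) {
    Rect trial = *player;

    trial.x += dx;
    if (!rects_overlap(&trial, obstacle))
        player->x = trial.x;   /* x move is safe */

    trial = *player;           /* reset to the committed position */
    trial.y += dy;
    if (!rects_overlap(&trial, obstacle))
        player->y = trial.y;   /* y move is safe */
}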


What's your opinion?


Unable to run GWT Test (Maven)

I'm trying to run $ mvn clean test for this GWT library, however it keeps throwing this error:



[ERROR] Unable to find type 'java.lang.Object'
[ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User')
Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 12.069 sec <<< FAILURE!
testAgoHours(com.github.kevinsawicki.timeago.client.TimeAgoTest) Time elapsed: 11.698 sec <<< ERROR!
com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries)
at com.google.gwt.dev.cfg.ModuleDef.checkForSeedTypes(ModuleDef.java:559)
at com.google.gwt.dev.cfg.ModuleDef.getCompilationState(ModuleDef.java:363)
at com.google.gwt.junit.JUnitShell.runTestImpl(JUnitShell.java:1342)
at com.google.gwt.junit.JUnitShell.runTestImpl(JUnitShell.java:1309)
at com.google.gwt.junit.JUnitShell.runTest(JUnitShell.java:653)
at com.google.gwt.junit.client.GWTTestCase.runTest(GWTTestCase.java:441)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at com.google.gwt.junit.client.GWTTestCase.run(GWTTestCase.java:296)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testAgoDay(com.github.kevinsawicki.timeago.client.TimeAgoTest) Time elapsed: 0.


Correct me if I am wrong, but I think the project has the correct GWT test structure, so I am not sure why it keeps throwing this error. If I run mvn gwt:test it passes, but it doesn't really run any tests at all.


Robotium mock application status

Most of my activities only work after login, but every test case creates its activity separately. How can I do the login procedure in a Robotium test?
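
A common approach is to script the login once in setUp() with Robotium's Solo, so every test starts from a logged-in state; a sketch (LoginActivity, the field indexes, and the button label are assumptions about the app under test):

import android.test.ActivityInstrumentationTestCase2;

import com.robotium.solo.Solo;

public class LoggedInTestBase extends ActivityInstrumentationTestCase2<LoginActivity> {

    protected Solo solo;

    public LoggedInTestBase() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
        solo.enterText(0, "testuser");   // first EditText on the screen
        solo.enterText(1, "secret");     // second EditText
        solo.clickOnButton("Login");     // button label is an assumption
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}

Each test class then extends this base instead of repeating the login.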


Test Driven Development Problems

I want to test my user registration system. We have three obstacles in our test-driven approach:



  1. Captcha

  2. Email confirmation

  3. Sms 2 factor authentication.


These are external systems. How can I stub them without compromising my system security?
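
One common pattern, sketched here in Java: hide each external check behind an interface, inject the real implementation in production, and keep a trivial stub in the test source tree only, so nothing that bypasses the checks can ship:

// production code depends only on this interface
interface CaptchaVerifier {
    boolean verify(String challenge, String response);
}

// test double: lives under the test source tree, never in production
class AlwaysPassCaptchaVerifier implements CaptchaVerifier {
    @Override
    public boolean verify(String challenge, String response) {
        return true; // tests treat every captcha as solved
    }
}

The same shape works for the e-mail and SMS gates: the test stub can record the confirmation code it would have sent, instead of dispatching it.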


Testing rails using capybara in drone is so slow

I have a problem here: I found that my Rails app, tested locally on my laptop (OS X), only needs 10-15 minutes to finish, but when I test it using Drone it needs about 2 hours. I checked and found that Capybara is the problem: when a test uses Capybara it becomes very slow.


Does anybody have an idea why this is happening?


How to allow this login class to pass junit testing without errors?

In the assignment I am working on, I have to test my code using the JUnit plugin in NetBeans. My code consists of many classes that depend on each other.


I just need to know how to do it for one of these classes to be able to apply the same logic on the rest of the classes.


So, here is the login class that I want to test:



package StateMachine;

import Objs.*;
import main.promptFactory;

/**
 * This class asks the logged-in user for its username to check its validity and
 * refers to its object in the loggedInUser variable in the MotherMachine.
 * If the user is validated, it changes myState to an object of idle.
 */
public class login implements stateInterface {

    @Override
    public void processCode(MotherMachine m, int code) {

        String name = promptFactory.promptString("Please enter your username to login:");
        try {
            int i = Integer.parseInt(name);

            if ((i == 00) || (i == 01) || (i == 02) || (i == 03) || (i == 04) || (i == 05) || (i == 06)) {
                System.out.println("Log in before asking for any transaction");
            }
        } catch (NumberFormatException ex) {
            userObj newuser = IO.users.search(name); // make this true if user is valid using above code

            if (newuser != null) { // if user is valid, initiate session
                m.loggedInUser = newuser;
                main.test.code = -2;
                m.setState(new idle());
            } else if (newuser == null) { // if name is invalid
                System.out.println("User does not exist"); // don't accept, prompt again
            }
        }
    }
}


this class implements the following state interface:



/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package StateMachine;

/**
 * An interface for the states to implement.
 */
public interface stateInterface {
    public void processCode(MotherMachine m, int code);
}


This is the MotherMachine class that has myState and LoggedInuser variables as well as the setState method and the processCode method:



package StateMachine;

import Objs.userObj;

/**
 * This package is the thinking brain behind the whole system. The event-driven design traverses between the
 * classes based on the input (transaction code). These classes take in the input from the user to process it.
 *
 * MotherMachine.java: This class is the starting point of the program; the transaction code inputted is passed
 * to the current (myState) object. [Could be to the login class, or create or delete or or or..]
 */
public class MotherMachine {

    private stateInterface myState;
    public userObj loggedInUser = new userObj();

    public MotherMachine() {
        this.setState(new login()); // initial state is always login (the default)
    }

    public void setState(stateInterface newState) {
        myState = newState;
    }

    public void processCode(int code) {
        if ((code <= 06 || code >= 00) && !(myState instanceof login)) { // if a session is in order and a valid transaction is submitted
            myState.processCode(this, code); // the current state will receive the code
        } else if (code == -1 && (myState instanceof login)) { // if a session hasn't yet started and the login code has been submitted
            myState.processCode(this, code); // pass code to the current state (will be login)
            ///////////////////// GO TO LOGIN CLASS //////////////////
        } else {
            System.out.println("Invalid transaction code");
        }
    }
}


The userObj class contains the objects needed by any user and is heavily used during login.


And the promptFactory class is just a factory to prompt the user for integers, doubles, strings, or chars.


Please, I need a detailed answer for this, to understand exactly what to do and to be able to apply it to the rest of the classes (for JUnit test classes, stubs, and anything else that can be used).
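
As a concrete starting point for the stub approach, here is a minimal JUnit sketch against the code above (placed in the StateMachine package so it can see the classes): a hand-written stateInterface stub records what processCode routes to it, so the test never needs console input:

package StateMachine;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MotherMachineTest {

    // hand-written stub: records the last code routed to it
    static class RecordingState implements stateInterface {
        int lastCode = Integer.MIN_VALUE;

        @Override
        public void processCode(MotherMachine m, int code) {
            lastCode = code;
        }
    }

    @Test
    public void validCodeIsRoutedToCurrentState() {
        MotherMachine m = new MotherMachine();
        RecordingState stub = new RecordingState();
        m.setState(stub);  // replace the initial login state with the stub
        m.processCode(3);  // a valid transaction code
        assertEquals(3, stub.lastCode);
    }
}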


Appreciating your help.


Thank you ..


What's the conventional location for storing stub classes?

I have a system under test and I need to create some stub classes to isolate my SUT. I'm using NetBeans, and my question is: should the stub classes be saved in the test folder along with the test class, or in the main package next to the stubbee(?)?


I have heard of some people creating stubs as inner classes in the test class. I don't want to do that, because a lot of my stubs will be shared among multiple test classes, so it's better design to make them stand-alone, and I want to follow the convention of where they're usually stored.


Thanks.


Automated test fails on button click

I downloaded a simple app from the Appium GitHub and tried to write an automated test for it using Appium Server (version 1.3.3). Here is my code:



import io.appium.java_client.AppiumDriver;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;

import static org.testng.AssertJUnit.assertEquals;

public class AppiumDemo {

    private AppiumDriver ad;

    @Before
    public void setupTest() {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("appium-version", "1.3.3");
        capabilities.setCapability("platformName", "iOS");
        capabilities.setCapability("platformVersion", "7.1");
        capabilities.setCapability("deviceName", "iPhone 5s");
        capabilities.setCapability("app", "/Users/admin/Downloads/TestApp.app");
        try {
            ad = new AppiumDriver(new URL("http://ift.tt/1eWSHgW"), capabilities);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }

    @After
    public void tearDown() {
        if (ad != null) ad.quit();
    }

    @Test
    public void simpleTest() {
        ad.findElement(By.name("IntegerA")).sendKeys("11");
        ad.findElement(By.name("IntegerB")).sendKeys("1");
        ad.findElement(By.name("ComputeSumButton")).click();
        int ans = Integer.parseInt(ad.findElement(By.name("Answer")).getText());
        assertEquals(ans, 12);
        ad.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
    }
}


It opens the simulator and enters the 2 numbers, but when it should click the button to calculate the sum, the test crashes and the simulator is killed. IntelliJ shows that it failed on the line where I call click(), with "Process finished with exit code 255". I use Yosemite v10.10.1 installed on VMware. Please help me; what could be the reason for this? Thanks a lot.


jeudi 25 décembre 2014

How to cover multiple test cases in one automation script

In many cases I find it redundant to write a script for each little test case. How can I use Microsoft Visual Studio to write a script that tests more than one test case and reports the result to each associated test case in Microsoft MTM, without running each test case separately? Say, for example, I have a Yes/No/Cancel dialog that pops up and there is a test case to verify each of the three cases. All three can be verified in one script. Would it be possible to associate each test case with the same script and get the results reported to each one by running the script only once?


What is the capture button element id in the view finder of the camera in selendroid. I am not able to inspect it

I want to know the capture button's element ID in the view finder of the camera in Selendroid; I am not able to inspect it. How do I inspect system apps? Here I am invoking the native camera app through my application to store a photo, but I am not able to inspect the camera capture button and get its element ID. Please help.


How to make FitNesse fork a new process for every test when running the suite

I am using the Fit library with custom fixtures. I see that the same SUT Java process is used for all of the tests in the test suite. Each of the tests in the suite makes a connection to the TcpServer; when run as a suite, I see the port blocked and address-bind issues for the subsequent tests. I tried introducing a sleep between the tests, but that did not help.


I want to know how I can run the suite in a way where each test is run in a new process.


Python nosetest ValueError exception

I have some function:



def reverse_number(num):
    try:
        return int(num)
    except ValueError:
        return "Please provide number"


and test for this:



assert_raises(ValueError, reverse.reverse_number, "error")


But when I call nosetests I got this error:



AssertionError: ValueError not raised


What am I doing wrong?
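
Note that, as written, reverse_number catches the ValueError itself and returns a string, so no exception ever reaches assert_raises; a sketch of asserting on what the function actually does instead:

from nose.tools import assert_equal

import reverse  # the module under test, per the assert_raises call above

def test_reverse_number_returns_message_for_bad_input():
    # the except block converts the ValueError into a return value,
    # so assert on that value rather than expecting an exception
    assert_equal(reverse.reverse_number("error"), "Please provide number")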


Cucumber Java - How to use returned parameters from a step in a new step?

I'm using Cucumber with Java in order to test my app.

I would like to know if it is possible to take an object returned from the first scenario step and use it in other steps. Here is an example of the desired feature file and Java code:



Scenario: create and check an object
    Given I create an object
    When I am using this object(@from step one)
    Then I check the object is ok


@Given("^I create an object$")
public myObj I_create_an_object() throws Throwable {
    myObj newObj = new myObj();
    return newObj;
}

@When("^I am using this object$")
public void I_am_using_this_object(myObj obj) throws Throwable {
    doSomething(obj);
}

@Then("^I check the object is ok$")
public void I_check_the_object_is_ok() throws Throwable {
    check(obj);
}


I would rather not use variables at the class level

(because then all method variables would be class-level),

but I'm not sure it's possible.


Is it possible to use a return value from a method as an input to the next step?
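
For context, Cucumber ignores step return values, so the usual pattern is a field on the glue class; since Cucumber creates a fresh instance of the glue class for every scenario, the field is scenario-scoped rather than truly global. A sketch:

// steps in one glue class share state through a field
public class ObjectSteps {

    private myObj obj; // reset for every scenario

    @Given("^I create an object$")
    public void I_create_an_object() throws Throwable {
        obj = new myObj();
    }

    @When("^I am using this object$")
    public void I_am_using_this_object() throws Throwable {
        doSomething(obj);
    }

    @Then("^I check the object is ok$")
    public void I_check_the_object_is_ok() throws Throwable {
        check(obj);
    }
}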


Mockito Matchers any Map

How can I use any map in mockito? I tried with following codes



when(mockedService.patch("1", Matchers.<Map<String, Object>>any())).thenReturn(object);


and with:



when(mockedService.patch("1", anyMap())).thenReturn(object);


But it returns:



org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
Invalid use of argument matchers!
2 matchers expected, 1 recorded.


It works only when I put any(String.class):



when(mockedService.patch(any(String.class), Matchers.<Map<String, Object>>any())).thenReturn(object);


But I want the option of putting actual values instead of any String.
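
For mixing literals with matchers, Mockito requires that once any argument uses a matcher, all of them must, so the literal gets wrapped in eq(); a sketch, dropped into the same test:

import static org.mockito.Matchers.anyMap;
import static org.mockito.Matchers.eq;

// every argument is now a matcher: eq() wraps the literal "1"
when(mockedService.patch(eq("1"), anyMap())).thenReturn(object);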


Spring's MockMvc perform patch

How can I perform a PATCH with Spring's MockMvc? I tried the following code:



@Test
public void should_patch_success() throws Exception {
    mockMvc.perform(
            request(HttpMethod.PATCH, "/Some/1")
                .content(convertObjectToJson(new User()))
                .contentType(MediaType
                    .parseMediaType("application/json;charset=UTF-8")))
        .andExpect(status().isOk());
}


I'm sure that the method convertObjectToJson works and the MediaType is set correctly, but it returns:



org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
Invalid use of argument matchers!
2 matchers expected, 1 recorded.

protractor e2e tests run on chrome, but halt on firefox

When I run my Angular app scenarios with Chrome, the scenarios run successfully, but they halt on Firefox (new version 35.0b6). Can anyone please help? Thanks in advance.


I'm using Protractor 1.4.0. My scenario:



describe('99ccs e2e testing', function() {

    it('check it have a title 99CCS', function() {
        browser.get('http://ift.tt/1zjFzwD');

        // it checks the "http://ift.tt/1JRoan9" page contains a title "99CCS"
        expect(browser.getTitle()).toEqual('99CCS');

        // it checks when user enter the URL as "http://ift.tt/1JRoan9" it navigates to "http://ift.tt/1zjFzwD"
        browser.get('http://ift.tt/1JRoan9');
        expect(browser.getLocationAbsUrl()).toBe('http://ift.tt/1zjFzwD');

        // it checks when user enter the URL as "http://ift.tt/1JRoan9" it navigates to Login page or not
        browser.getLocationAbsUrl().then(function(url) {
            expect(url.split('#')[1]).toBe('/login');
        });
        expect(browser.get('http://ift.tt/1JRoan9')).toEqual(browser.get('http://ift.tt/1zjFzwD'));

        // it checks if we give any location url from 99ccs.com/ccsnew without login it navigates to Login page or not
        expect(browser.get('http://ift.tt/1zjFybS')).toEqual(browser.get('http://ift.tt/1zjFzwD'));
    });

});


I got an error at the console: