Wednesday 30 November 2016

How to send data into a channel for testing purposes in Golang?

I'm trying to write unit tests for functions for a project in Go and I'm coming up against a problem I've never encountered before. The function is used in a loop that monitors Slack (the messaging platform) live for certain event structures (defined by a library I'm using) and responds accordingly depending on the event returned (using a switch). Here's (most of) the code:

func botLoop(s *SlackBot) {
    select {
    case rtmEvent := <-s.Rtm.IncomingEvents:
        switch ev := rtmEvent.Data.(type) {
        case *slack.MessageEvent:

            o, err := s.HandleCommand(ev)
            if err != nil {
                fmt.Printf("%s\n", err)
                s.Say(ev.Channel, "%s\n", err)
                break
            }
            s.Say(ev.Channel, o)

        case *slack.LatencyReport:
            fmt.Printf("Current latency: %v\n", ev.Value)

        default:
            // fmt.Printf("Unexpected: %v\n", msg.Data)
        }
    }
}

How can I pass an rtmEvent into the s.Rtm.IncomingEvents channel to trigger my code for testing purposes? Is there any way to reliably do this?

Here's the documentation for the API library I'm using, if that makes things any easier.
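
For reference, one way to exercise the loop from a test is to construct the bot with a channel you control and push a fabricated event into it before calling botLoop. This is only a sketch: it assumes s.Rtm.IncomingEvents is an exported channel of slack.RTMEvent values, and newTestSlackBot is a hypothetical helper you would write yourself.

func TestBotLoopHandlesMessageEvent(t *testing.T) {
    s := newTestSlackBot(t) // hypothetical helper that builds a SlackBot without connecting to Slack

    // Queue a fabricated event; sending from a goroutine works whether or not
    // the channel is buffered, since botLoop's select will receive it.
    go func() {
        s.Rtm.IncomingEvents <- slack.RTMEvent{
            Type: "message",
            Data: &slack.MessageEvent{}, // populate Channel/Text etc. as your handler needs
        }
    }()

    botLoop(s) // returns after handling the single queued event

    // ...assert on whatever Say/HandleCommand recorded in your test double...
}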

How to include test classes into shadowJar?

I am using the shadow Gradle plugin to build a JAR containing all referenced jars inside.

In my build.gradle I have only

apply plugin: "com.github.johnrengelman.shadow"

and

jar {
    manifest {
        attributes 'Main-Class': 'MYCLASS'
    }

}

related to that. I don't know how it knows what to build, but it works.

Now, is it possible to include test classes too?
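
For what it's worth, the shadowJar task can be told to pull in extra source sets; a minimal sketch (assuming the default Java plugin source sets and the older testRuntime configuration) would be:

shadowJar {
    // include the compiled test classes next to the main ones
    from sourceSets.test.output
    // also merge the test-only dependencies into the fat jar
    configurations = [project.configurations.runtime, project.configurations.testRuntime]
}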

Testing OnPush components in Angular 2

I am having trouble testing a component with OnPush change detection strategy.

The test goes like this

it('should show edit button for featured only for owners', () => {
    let selector = '.edit-button';

    component.isOwner = false;
    fixture.detectChanges();

    expect(fixture.debugElement.query(By.css(selector))).toBeFalsy();

    component.isOwner = true;
    fixture.detectChanges();

    expect(fixture.debugElement.query(By.css(selector))).toBeTruthy();
});

If I use the Default strategy it works as expected, but with OnPush the change to isOwner is not re-rendered by the call to detectChanges. Am I missing something?
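
One workaround that is commonly suggested is to override the change detection strategy for the test module only, so detectChanges behaves like Default during the spec. A sketch (the component class name FeaturedComponent is an assumption):

import { ChangeDetectionStrategy } from '@angular/core';
import { TestBed } from '@angular/core/testing';

TestBed.configureTestingModule({
    declarations: [FeaturedComponent]
})
.overrideComponent(FeaturedComponent, {
    // run this spec with default change detection so detectChanges() re-renders
    set: { changeDetection: ChangeDetectionStrategy.Default }
});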

PHPUnit in a legacy enviroment

I'm starting to set up PHPUnit (v4.8) to be used with my 'legacy' code (it's not so legacy, but it has bad programming practices).

The structure of my folders is as follows

/home
  -phpunit.xml
  /folder1
  /folder2
  /folder3
  /vendor
  /tests
   -Test1.php
  /includes
   -functions.php
   /libs
    -User.php
    -TableClass.php
    ....

functions.php

<?php
// require_once $_SERVER['DOCUMENT_ROOT'] . "/home/vendor/autoload.php";
require_once $_SERVER['DOCUMENT_ROOT'] . "/home/includes/libs/User.php";
?>

I have commented out that line, because I think Composer automatically loads it. Question 1: am I right? (Because PHPUnit gets automatically recognized inside my test class...)

Test1.php

<?php

class Test1 extends PHPUnit_Framework_TestCase
{
  public function testSomething()
  {
    // $something = getColNameByStatusId(1);
    $this->assertEquals(1,2);
  }
}
?>

phpunit.xml

<phpunit bootstrap="includes/functions.php" colors="true">
    <testsuite name="Test1" >
        <directory>./tests</directory>
    </testsuite>
</phpunit>

Then I Execute phpunit in command line

My functions.php works fine in my code, of course with no Composer integration, but when it's loaded with PHPUnit it 'breaks' and I get the following error:

Warning: require_once(/home/includes/libs/table_classes/User.php): failed to open stream: No such file or directory in C:\wamp\www\home\includes\functions.php on line 18

I think I'm missing the 'loading' stuff for PHPUnit. My code doesn't use namespaces, nor PSR-0 or PSR-4 autoloading.

Question 2: how do I properly load files in this case?

My goal is to load functions.php, which will then load all the other 'table' classes needed for my tests.
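
Regarding the path error: $_SERVER['DOCUMENT_ROOT'] is empty when PHPUnit runs from the command line, so every require built from it breaks. A sketch of functions.php rewritten with __DIR__-relative paths (the exact file list is an assumption based on the layout above):

<?php
// includes/functions.php
// Composer's autoloader: phpunit.xml already bootstraps this file, so pull it in here
// (or point bootstrap="vendor/autoload.php" in phpunit.xml instead).
require_once __DIR__ . '/../vendor/autoload.php';

// Legacy classes without PSR-0/PSR-4: resolve them relative to this file,
// not to DOCUMENT_ROOT, so the same code works on the web and on the CLI.
require_once __DIR__ . '/libs/User.php';
require_once __DIR__ . '/libs/TableClass.php';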

SendKeys to a windows file dialog

I want to send the string ABC to the input field of a windows file dialog. With this code line I can set the focus to the correct element.

var filedialogOverlay = drv.SwitchTo().ActiveElement();

But the following code doesn't write the string into the element.

Thread.Sleep(1000);
filedialogOverlay.SendKeys("ABC");
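
A native Windows file dialog lives outside the browser DOM, so WebDriver's SendKeys often cannot reach it. One workaround (a sketch; it assumes a .NET Framework test project where you can reference System.Windows.Forms) is to type at the OS level:

using System.Threading;
using System.Windows.Forms;   // add a reference to System.Windows.Forms in the test project

Thread.Sleep(1000);           // give the native dialog time to take focus
SendKeys.SendWait("ABC");     // types into whichever window currently has focus
SendKeys.SendWait("{ENTER}"); // optionally confirm the dialog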

What kind of black/white/gray box tests can I conduct on operating systems on servers?

I'm about to get started on doing some penetration tests on a couple of servers and I did some advanced reading on the main categories of tests I can conduct:

  1. Kernel
  2. Memory
  3. File IO
  4. System Configuration

Are these right? But what I was really wondering is how exactly should black box penetration tests be carried out in those categories above? The only ones I can think of at the moment are like DOS, or MITM.

Testing Angular 2 service that returns a Promise

I'm trying to unit test a service which should use fake http backend, provided by Angular's in-memory data service. This is the relevant code:

describe('getCars() method', () => {
    it('should return a resolved Promise', inject([DataService], (service: DataService) => {
        service.getCars().then((value) => {
            expect(value.length).toBe(3);
        });
    }));
});

The problem is I can't use Jasmine's done callback to handle the asynchronous service.getCars() call, because of how the inject function works. I can't use the async test helper either, because it can't work with promises. So I have no idea how to wait for the promise to resolve; the test just finishes without ever reaching expect.
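
One way around this that may be worth trying is fakeAsync/tick instead of inject, so the pending promise can be flushed synchronously inside the test zone (a sketch; if the in-memory backend simulates latency you may need tick(delayMs)):

import { fakeAsync, tick, TestBed } from '@angular/core/testing';

it('should return a resolved Promise', fakeAsync(() => {
    const service = TestBed.get(DataService);   // same instance inject() would hand you
    let cars: any[];

    service.getCars().then((value) => cars = value);
    tick();                                     // flush the pending promise

    expect(cars.length).toBe(3);
}));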

Tuesday 29 November 2016

What are Mocha and Chai? And how do I work with them?

I am new to this kind of development, so I wanted to know what Mocha and Chai are and how to work with them.

If you have any demo that combines the two, please share it with me.
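
For a first orientation, a minimal Mocha + Chai spec looks like this (install with npm install --save-dev mocha chai and run with ./node_modules/.bin/mocha):

var expect = require('chai').expect;   // Chai provides the assertion style

function add(a, b) { return a + b; }   // the code under test

describe('add()', function () {
    it('adds two numbers', function () {
        expect(add(2, 3)).to.equal(5);
    });

    it('concatenates when given strings', function () {
        expect(add('uni', 'test')).to.equal('unitest');
    });
});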

How to mock Environment Interface

I have to fetch a path from the property file using the Environment interface. In my JUnit tests I am not able to mock the Environment interface. Below is my code. I want it to return something random when the method mentioned is called. How can I do it?

@Mock private Class object;
@InjectMocks Class2 object2;

Mockito.when(object.getFilePath()).thenReturn("Random String");
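
Spring's Environment is an ordinary interface, so Mockito can mock it directly and the mock can be handed to the class under test. A sketch (PathReader and the property key are made-up names for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.springframework.core.env.Environment;

public class PathReaderTest {

    @Test
    public void readsPathFromEnvironment() {
        Environment env = mock(Environment.class);
        when(env.getProperty("file.path")).thenReturn("some random string");

        PathReader reader = new PathReader(env);   // constructor injection of the mock
        assertEquals("some random string", reader.getFilePath());
    }
}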

Trying to put together a few assertion functions and i cannot get a try with to work

I am just learning F#, so I am trying a few things out (I do know I could just use xUnit or something else).

I have the following assertion method, and the idea is that it should take an expected exception and the function that is expected to throw this exception, then execute the function and, inside the with block, test whether the exception thrown is the same as the expected one.

let assertException (testName : string) (expected : 'a when 'a :> Exception) functionToBeTested =
    try
        functionToBeTested
        (false)
    with
    | :? Exception as someException when someException :? expected ->
            printTestResultInMiddle (sprintf "Test: %s PASSED: Raised expected exception %A" testName expected) true 
            (true)
    | _ ->
        printTestResultInMiddle (sprintf "Test: %s FAILED: expected exception %A" testName expected) false
        (false)

It gives me the error Unexpected symbol '(' in pattern matching. Expected '->' or other token on the line where I try to call a print method. Shouldn't I be able to treat this try ... with as a

match ... with

??

And another question: could I do this a lot more easily?
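
For comparison, one way to make this compile is to take the function as unit -> unit, actually invoke it, and compare exception types in the guard instead of using :? against a value (a sketch, reusing printTestResultInMiddle from above):

let assertException (testName : string) (expected : exn) (functionToBeTested : unit -> unit) =
    try
        functionToBeTested ()
        printTestResultInMiddle (sprintf "Test: %s FAILED: no exception raised" testName) false
        false
    with
    | someException when someException.GetType() = expected.GetType() ->
        printTestResultInMiddle (sprintf "Test: %s PASSED: Raised expected exception %A" testName expected) true
        true
    | _ ->
        printTestResultInMiddle (sprintf "Test: %s FAILED: expected exception %A" testName expected) false
        false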

Shoulda with Rspec Gem returning "undefined method `reflect_on_association' for String:Class" in belong_to test

In my Rails app, I have the models Request, Service, and ServiceRequest.

In my model files I have:

request.rb:

class Request < ApplicationRecord

  validates_presence_of :userid, :supervisor, :status 

  has_many :servicerequests, dependent: :destroy
  accepts_nested_attributes_for :servicerequests

end

service.rb:

class Service < ApplicationRecord
  validates_presence_of :title, :responsible

  has_many :servicerequests, dependent: :destroy
end

servicerequest.rb:

class Servicerequest < ApplicationRecord
  belongs_to :request, optional: true
  belongs_to :service, optional: true
end

and the spec causing issues servicerequest_spec.rb:

require "rails_helper"

describe "ServiceRequests", :type => :model do 
  it "is valid with valid attributes"
  it "is not valid without a userid"
  it "is not valid without a request_id"
  it "is not valid without a service_id"

  it { should belong_to(:request)}
  it { should belong_to(:service)}
end

these two lines specifically:

it { should belong_to(:request)}
it { should belong_to(:service)}

I'm getting the error:

     NoMethodError:
   undefined method `reflect_on_association' for String:Class
 # /Users/criva/.rvm/gems/ruby-2.3.1@onboard/gems/shoulda-matchers-2.8.0/lib/shoulda/matchers/active_record/association_matchers/model_reflector.rb:21:in `reflect_on_association'
 # /Users/criva/.rvm/gems/ruby-2.3.1@onboard/gems/shoulda-matchers-2.8.0/lib/shoulda/matchers/active_record/association_matchers/model_reflector.rb:17:in `reflection'
 # /Users/criva/.rvm/gems/ruby-2.3.1@onboard/gems/shoulda-matchers-2.8.0/lib/shoulda/matchers/active_record/association_matcher.rb:825:in `reflection'
 # /Users/criva/.rvm/gems/ruby-2.3.1@onboard/gems/shoulda-matchers-2.8.0/lib/shoulda/matchers/active_record/association_matcher.rb:993:in `association_exists?'
 # /Users/criva/.rvm/gems/ruby-2.3.1@onboard/gems/shoulda-matchers-2.8.0/lib/shoulda/matchers/active_record/association_matcher.rb:926:in `matches?'
 # ./spec/models/servicerequest_spec.rb:10:in `block (2 levels) in <top (required)>'

I realize that shoulda doesn't have optional support built in yet, but I would like to figure out a way to test the association while keeping that option there.

Any help would be great in solving this mystery.

I have attempted it { should belong_to(:request).optional(true)} and it { should belong_to(:request).conditions(optional: true)} to no avail.
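
One thing worth checking: the String:Class in the error suggests shoulda is building its subject from the describe argument, which here is the string "ServiceRequests" rather than the model class. A sketch of the spec describing the class itself (keeping the association matchers as they are):

require "rails_helper"

describe Servicerequest, type: :model do
  # shoulda-matchers now reflects on Servicerequest instead of String
  it { should belong_to(:request) }
  it { should belong_to(:service) }
end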

Elixir pry session interrupted because database connection timed out

I was happily following this advice on how to run a pry debugger inside my Phoenix controller tests:

  • require IEx in the target file
  • add IEx.pry to the desired line
  • run the tests inside IEx: iex -S mix test --trace

But after a few seconds this error always appeared:

16:51:08.108 [error] Postgrex.Protocol (#PID<0.250.0>) disconnected: 
** (DBConnection.ConnectionError) owner #PID<0.384.0> timed out because 
it owned the connection for longer than 15000ms

As the message says, the database connection appears to time out at this point and any commands that invoke the database connection will error out with a DBConnection.OwnershipError. How do I tell my database connection not to time out so I can debug my tests in peace?
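
The sandbox ownership timeout can be raised for the test environment so a pry session doesn't kill the connection; a sketch for config/test.exs (my_app and MyApp.Repo are placeholders for your application's names):

config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  pool: Ecto.Adapters.SQL.Sandbox,
  # let a single test own the checked-out connection for up to an hour
  ownership_timeout: 60 * 60 * 1000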

Telerik Drop Down List

Here's the situation:

I am testing a form using Telerik Test Studio.

There are 3 drop-downs (State, City, Building), and what you choose in the previous box decides what is in the next box. Before you choose anything in drop-down box number one, boxes two and three are empty (no choices). When you choose a State, it gives you options for cities; then once you choose a city, the building drop-down gets populated with options.

The problem is, when I am testing this with the drop-downs, it chooses the state correctly, but for some reason it doesn't register that a state has been chosen, so the city drop-down does not get populated. I slowed the process down while the test was running to try and see if it was something simple, and I saw myself that it DOES choose a state, so I tried to manually choose the city, but there was nothing to populate the drop-down options. However, if I choose the state myself, then the cities populate.

Thank you for your help.

M

Testing RxJava Subscribers in Android

I am writing an Android app adhering to the MVP design pattern and using RxJava to make network calls.

My Presenter layer is responsible for making a network call (via an injected network layer object) and then setting the state of the view before and after the network call (i.e. show a progress bar then hide the progress bar).

The Presenter contains a Subscriber and because of that I'm wrestling with the best way to write unit tests for the Presenter class.

As I see it there are two options:

  1. One test class will assert that the Presenter interacts with the view correctly before the Observable is subscribed to. A different test class would be used to test just the Subscriber and would assert that the view is interacted with correctly after the network call completes. In other words, one test class would test the Presenter and another would test the Subscriber; the Presenter test would check that things behave correctly pre-subscription and the Subscriber test would check that things behave correctly post-subscription.

  2. A single test class is used that asserts that the view is interacted with correctly before the Observable is subscribed to and then the Observable is mocked so that it can return different responses (onError(), onNext(), etc...) and the test can verify that the view was interacted with correctly when the Observable is subscribed to. In this case there is just one test class that tests everything.

I like option 1 because it separates testing the subscriber separately from the presenter which "feels" right. However I like option 2 because once you've written your test class you're done and you've covered every case.

Are there better ways to do this than I've suggested? Any arguments for option 1 vs option 2 vs a completely different way?

Using google test on windows with MSVC 2015

I have spent far too long trying to get GoogleTest working using MSVC 2015 so I'm hoping you clever SO guys can give me a hand.

What I've done:

  1. Cloned the GoogleTest github repo to my machine.
  2. Used CMake to generate MSVC project files. (I originally used the project files that come with the checkout, only to later find out after some searching that these do not appear to be complete and the CMake-generated ones apparently have the correct defines etc.)

I can see that the sample tests compile fine in the CMake-generated projects. However, in the project I have created for my own tests this is not the case. I've looked up just about every SO thread I can find and any other nugget of info... I have also made sure that between the CMake sample test projects and my own all the compiler and linker options are identical, so I'm at a total loss.

In my project I get the following compilation errors:

All six errors are in project TestMpegMessing, file C:\Users\James\Documents\Git\mpeg_ts_messing\gtest\src\gtest_binary_buffer.cpp, line 6:

Error C2440: '<function-style-cast>': cannot convert from 'initializer list' to 'testing::internal::AssertHelper'
Error C2065: 'gtest_ar': undeclared identifier
Error C2589: 'switch': illegal token on right side of '::'
Error C2181: illegal else without matching if
Error C2228: left of '.failure_message' must have class/struct/union
Error C2059: syntax error: '::'

Has anyone had a similar problem? If not, I could do with a few tips on how on earth to further debug this.

CodedUI: Logging inside AssemblyCleanup

We have a Coded UI test framework that allows test authors or result analyzers to mark a test as failing to a specific bug. Basically, if a test is marked this way, we store the test name and the bug it fails to in our TestContext object, and eventually at the end of the run we want to output a list of tests that failed to bugs.

We had been printing this information on a test-by-test basis using TestCleanup, but this leads to confusion. As more tests executed inside the run, the list would increase when it came across expected failures, and later tests would output more data than previous tests. For example, if I ran 10 tests, and test 2 and 8 failed to an expected bug, the first result log would omit that section entirely, the 2nd through 7th result logs would show one expected failure, and the 8th through 10th would show two expected failures. In other words, some would have a section that looked like this:

Test A failed to Bug 1

And some would have that same section that looked like this:

Test A failed to Bug 1

Test B failed to Bug 2

This made analyzing a run confusing, since the true list of expected failures would show up only on the last test that ran, whatever that one happened to be.

So doing this in TestCleanup is impractical, and I don't want to do it in ClassCleanup because we have a ton of classes that hold our tests -- it's our primary method of organizing tests. That leaves me with AssemblyCleanup.

I got AssemblyCleanup to execute, and I can call logging code in it. However, I can't find the logged output anywhere. If I run from the command line using vstest.console.exe and generate a .trx file, the run details view doesn't show me any logged output. If I run a test individually within Visual Studio, I don't even get a run details view. And the AssemblyCleanup logging output doesn't get put into individual test logs, as I would expect.

Ultimately my question is this: is there any way to log information inside the AssemblyCleanup method of a Coded UI test and view it somewhere, or is that a completely pointless thing to do?

test suite not overwrite DefaultSharedPreferences

I am using the SettingsActivity created by AndroidStudio (extends AppCompatPreferenceActivity, which extends PreferenceActivity). My preference values are getting stored in the DefaultSharedPreferences (pkg-name_preferences.xml).

I'd like to be able to specify a different name for the DefaultSharedPreference file when the test suite is running. This would prevent the test suite from over-writing any preference values I may have set during normal use of the app.

Is it possible to do this? (Just to be clear: I am able to detect if the test suite is running, but I don't know how to specify a name for the DefaultSharedPreferences.)

I found some old posts that suggest:

PreferenceManager prefMngr = getPreferenceManager();
prefMngr.setSharedPreferencesName("my_name");

getPreferenceManager() was deprecated in API 11, and if I try to use it anyway it returns null.

Or maybe there is some other way to achieve my objective (test suite not over-writing app's preference values)?

Agency web devs: how do you create & test responsive design?

I'm interested in agency web devs in particular, as they seem to have tight time budgets.

The specific things I'm interested in are:

  • Do you only try to make your design look good at a selection of breakpoints? If so, what are they?
  • How do you go about responsive testing: any tools, or is it all manual?
  • How do you handle device testing?
  • Do you find methodologies like BEM etc. a time saver or a way to reduce regressions?

Xamarin Forms Unit test GPS

I'm currently working on creating unit tests for our application and it's highly focused on GPS data (it's a route tracking app). Is it possible to fake GPS data within Xamarin's unit tests in a way that also works on the Xamarin Test Cloud for both iOS and Android?

Count method invocations in Spring Service used by Bean

Suppose I have the following bean:

@Bean
public void consumer() {

   while(true) {
      String message = //blocked waiting for message to consume
      myService.processMessage(message);
    }

}

That uses the following service:

@Service
public class MyService {

    public void processMessage(String message) {

        //process the message

    }

}

which is autowired throughout the rest of the application. In my testing class I also autowire the service:

public class MyTest {

   @Autowired
   private MyService myService;

   (...)
}

Now, when the application is running and/or being tested by the MyTest class the consumer waits for messages and processes them with MyService.processMessage method. What I need is for MyTest to be able to count how many invocations are performed to MyService.processMessage method. Ideally, I would like to be able to do something like:

public class MyTest {

  @Autowired
  private MyService myService;

  @Before
  public void intersect() {

      ProxyFactory pf = new ProxyFactory(myService);
      pf.addAdvice((MethodInterceptor) invocation -> {
      if(invocation.getMethod().getName().startsWith("processMessage")) {
        //Do my count
      }
      return null;
    });

  }

}

Unfortunately, the above code excerpt does not work because the myService instance is autowired.

My question is: Does anyone know how to implement something in line with the latter code excerpt that actually works? Thank you in advance.
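
If the project is on Spring Boot 1.4+, one option is @SpyBean, which replaces the autowired MyService bean with a Mockito spy wrapping the real instance, so every call made by the consumer can be counted with verify. A sketch (how the consumer is triggered is left out):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class MyTest {

    @SpyBean
    private MyService myService;   // the real bean, wrapped in a spy and injected everywhere

    @Test
    public void countsProcessedMessages() throws Exception {
        // ...publish/produce the messages the consumer is waiting for...

        Mockito.verify(myService, Mockito.times(2)).processMessage(Mockito.anyString());
    }
}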

How can I click on another element when the original was not found?

I've created a function to select the payment randomly (by number). When the first selection (number) is not found (try/catch), I want to select another one.

But I always receive:

Failed: Cannot read property 'click'

when sending number 8, and it should then select the credit card Visa...

What am I doing wrong? The elements are really there and the ID is also correct:

CheckoutPage3.prototype.selectPayment = function (number) {
    if (number == 1) {
        try {
            this.elementPaymentPaypal.click().then(function () {
                return 1;
            }, function (err) {
                console.log("payment not found, select new");
                this.elementPaymentCreditCard.click();
                this.elementCreditVISA.click();
                number = 2;
            });
        }
        catch (err) {
            console.log('error occured');
        }
    }
    else if (number == 2) {
        this.elementPaymentCreditCard.click();
        this.elementCreditVISA.click();
        number = 2;
    } else if (number == 3) {
        this.elementPaymentCreditCard.click();
        this.elementCreditMasterCard.click();
        number = 3;
    }
    else if (number == 4) {
        try {
            this.elementPaymentCreditCard.click();
            this.elementCreditAmericanExpress.click().then(function () {
                number = 4;
            }, function (err) {
                console.log("payment not found, select new");
                this.elementCreditVISA.click();
                number = 2;
            });
        }
        catch (err) {
            console.log('error occured');
        }
    }
    else {
        try {
            this.elementPrePayment.click().then(function () {
                number = 5;
            }, function (err) {
                console.log("payment not found, select new");
                this.elementPaymentCreditCard.click();
                this.elementCreditVISA.click();
                number = 2;
            });
        }
        catch (err) {
            console.log('error occured');
        }
    }
};
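
One detail that can cause exactly this error: inside the .then() success/error callbacks, this is no longer the page object, so this.elementPaymentCreditCard is undefined and .click() blows up. A sketch of one branch with the page object captured first:

CheckoutPage3.prototype.selectPayment = function (number) {
    var self = this;                              // keep a reference to the page object
    if (number == 1) {
        return self.elementPaymentPaypal.click().then(function () {
            return 1;
        }, function (err) {
            console.log("payment not found, select new");
            self.elementPaymentCreditCard.click(); // `self` still points at the page object here
            return self.elementCreditVISA.click();
        });
    }
    // ...apply the same pattern to the other branches...
};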

How to sign old iOS version

I would love to be able to test my app against older iOS versions but unfortunately I upgraded my iPhone to iOS 10...

When going in the downloads section of the Apple website I can't find older iOS images anymore, so I can't downgrade.

This is old news.

Is there a way to sign unsigned IPSW files so that they can be installed on an iPhone without bricking them?

This is so inconvenient. Can't really understand why they did this.

Import multiple http requests with SOAP code to jmeter

I am trying to stress-test a SOAP service, using HTTP Request samplers with Body Data. The problem is: I have a file with 50K lines where I can find the SOAP body and the SOAPAction.

I would like to import a CSV with: 1. the SOAP body into Body Data of the HTTP Request, 2. the SOAPAction into the HTTP Header Manager related to that HTTP Request.

Is there any chance that JMeter can read the file and do this job in one HTTP Request using parameters? I don't want to change the JMX configuration to inject this directly into the plan. I need to do this for files with 50K requests, 100K requests, 200K requests, etc.

Save Web Service Response as XML

I'm working with HP UFT and HP ALM. I'm trying to save a web service response as XML in an API test. I found many tutorials on saving a response to Excel, but that's not what I want. Is this possible? If not, is it possible to download a .wsdl response?

API-Test construction

1) Check whether a specific XML file (reference.xml) exists on my computer
2) If not, stop the test; if reference.xml exists, keep going
3) Now I want to save the response
4) Compare the two files (reference.xml and the downloaded XML)

Error executing Calabash iOS tests Xamarin Test Cloud

I have various tests written in Calabash using Ruby. I've tried the tests locally on the simulator and on a physical device and there weren't any issues, but when I try to execute them on the Xamarin Test Cloud it throws an error related to the Calabash launcher. I've changed the 01_launch.rb file several times and the error persists. This is the log file:

{"type":"install","last_exception":{"class":"XTCOperationalError","message":"500\n{\"message\":\"undefined method `strip' for nil:NilClass\",\"app_id\”:\”XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"}","backtrace":null}} (XTCOperationalError)
./xtc-sandbox-runtime-lib/xtc/formatter/patches/calabash_common_patch.rb:64:in `raise_operational_error'
./xtc-sandbox-runtime-lib/xtc/formatter/patches/calabash_ios_patch.rb:223:in `xtc_install_app'
./xtc-sandbox-runtime-lib/xtc/formatter/patches/calabash_ios_patch.rb:129:in `relaunch'
./features/support/01_launch.rb:98:in `launch'
./features/support/01_launch.rb:340:in `Before'

I've been searching about this issue but I can't find anything related. I'm trying to test the platform in order to use it in a CI environment for a large iOS and Android app. Any help would be welcome. Thanks. P.S.: The app_id is omitted; in the tests the real app_id is set correctly.

How to check if an HTML5 validation was triggered using phantomjs?

I am testing my Ember app (Ember 1.6) using PhantomJS. I want to assert that HTML5 validation is triggered for invalid input. Here is my test:

fillIn('#MyInputField', 'some invalid data');
click('#MyButton');

andThen(function() {
    strictEqual(find('#MyInputField:invalid').length, 1, 'Expected HTML 5 validation triggered!');
});

This works fine when I test it using Karma in a browser, but when testing in PhantomJS it fails. I have made a screenshot, and according to that image there is no HTML5 validation.

Testing on Chrome with flash

I'm doing automated tests with Chrome.

Part of my suite needs Flash to be activated (sigh...).

I'm struggling to activate it. Here's what I got:

If I open a regular chrome session, with Flash checked in chrome://plugins, it works there:

Flash checked

When the tests start, the command line executed is:

▶ ps -edf | grep chrome
augustin 24752 24743  2 12:12 pts/0    00:00:07 /opt/google/chrome/chrome --user-data-dir=/tmp/karma-22735678 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate --disable-background-timer-throttling http://ift.tt/2gELs6M

I always get Download failed or sometimes flash version outdated.

Download failed

Even if I check flash in plugins and reload, even if I allow it specifically for this tab and refresh:

Flash allowed

I tried several command line options:

  • --always-authorize-plugins from there
  • --enable-plugins from there

without success. :(

Thanks for the help

Test data is saved on refresh - karma

I'm writing a Javascript application and need to test that an image stays 'selected' after the page is refreshed. I am using Karma with Jasmine.

If I call window.location.reload(); in my test though I get the error Some of your tests did a full page reload!

Is there a way I can simulate a page reload or a different testing approach I should be taking?

Thanks

How to assert dom change in ember app which has been executed in run loop?

I am writing tests for an Ember app written in Ember 1.6.

Inside the controller I have a function executed on promise success:

var me = this;

function onSuccess(result) {

    printSuccessMessage();

    Ember.RSVP.all(promises).then(function(value) {
        Ember.run.later(this, function() {
            clearMessages();
        }, 5000);
    });
}

then inside test I am trying to assert that success message appears:

    fillIn('#MyInputField', 'Some text');
    click('#MyButton');

    andThen(function() {
        strictEqual(find('[data-output="info-message"]').text().trim().indexOf('Done!') >= 0, true, 'Expected success message!');
    });

But the problem is that after the click, andThen waits for the run loop to empty. So after this click andThen waits 5 seconds and then executes the assertions. By that moment clearMessages() has already executed and the message div is cleared, so the test fails.

Any idea how to assert that this message has certain text?

Testing push notifications generated by cloud

I have a cloud site, and also native iOS and Android apps.

I have lots of automated test cases for all 3 platforms, but I would like to know if there's a possibility to test from the cloud whether push notifications are received.

So my test should be:

  1. From the cloud, generate a push notification
  2. From the mobile, verify this push has arrived
  3. From the cloud, disable push notifications
  4. From the cloud, generate a push notification
  5. From the mobile, verify this push does not arrive

I've been looking into it but I could not find any tool, framework or anyone who has been facing the same problem as me.

Hope someone can help

Thanks! PS: We are using selenium & ruby for cloud automation tests

Testing App State Saving/Restoration on Device without Xcode Connected

I am saving and restoring my app's state by opting in (in the AppDelegate):

func application(application: UIApplication, shouldRestoreApplicationState coder: NSCoder) -> Bool {    
    return true
}

This works when I kill the app with the Xcode stop execution button after the app has been backgrounded.

But when I try to kill the backgrounded app on the device (by double tapping the home button and swiping the app up off the screen), then the app state is not restored when relaunched.

How can I quit out of the app on the device so that it will restore its state? Or is it now only possible to test it using Xcode?

I'm getting the following error when I run my tests on the Xamarin Test Cloud:

System.IO.FileNotFoundException : Could not load file or assembly 'System.Net.Http, Version=1.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies.
  at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine] (System.Runtime.CompilerServices.TStateMachine& stateMachine) <0x688fce0 + 0x000ff> in :0
  at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[TResult].Start[TStateMachine] (System.Runtime.CompilerServices.TStateMachine& stateMachine) <0x688fcb8 + 0x00017> in :0
  at fieldapp.UITests.TestHelper.getListOfFields () <0x688fb20 + 0x0011f> in :0
  at fieldapp.UITests.Tests.AreOnlyVisibleFieldsOnNewDocumentScreenDisplayedTest () <0x6887be8 + 0x0036b> in :0
  at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
  at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) <0x323a110 + 0x00093> in :0

When I run the tests locally on the emulator, they work fine

But when I run them on the Test Cloud, they fail with the above message.

Does anyone know the solution?

Monday 28 November 2016

Automation testing tool to be used for eclipse 3.X application

Can anyone suggest an automation testing tool that can be used to develop an automation framework for testing an Eclipse 3.x application, apart from SWTBot? To be precise, I want to test an Eclipse 3.x plugin and not 4.x.

Testing React PropTypes with sinon

As an example I used Make React PropType warnings throw errors with enzyme.js + sinon.js + mocha.js.

I have a React component with one required prop:

class Pagination extends Component {
    render() {
        return (
            ... render some stuff
        );
    }
}

Pagination.propTypes = {
    total: PropTypes.number.isRequired
};

And this is test for it:

describe('(Component) Pagination', () => {
      before(() => {
         sinon.stub(console, 'error', (warning) => { throw new Error(warning) })
      })
      after(() => { console.error.restore() })

      it('render fails without props', () => {
          shallow(<Pagination />);
      });

      it('render fails without props2', () => {
        shallow(<Pagination />);
      });
    });

After running these tests, the first one crashes, but the second does not. The tests are similar. I think the problem is that React throws warning messages only once. How do I avoid this?

I want to have 2 tests: one that crashes when no props are set, and a second that works fine with props.

How to perform unit testing

What is unit testing? How to perform unit testing.

Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated but it can also be done manually. Is this correct?

Stubs and proxyquire

I'm having an issue with stubbing a particular function with sinon after having used proxyquire.

Example:

// a.js
const api = require('api');

module.exports = (function () {
    return {
        run,
        doStuff
    };

    function run() {
        return api()
            .then((data) => {
                return doStuff(data);
            })
    }

    function doStuff(data) {
        return `Got data: ${data}`;
    }
})()

// a.spec.js - in the test
a = proxyquire('./a', {
    'api': () => Promise.resolve('data')
})
sinon.stub(a, 'doStuff');
// RUN TEST - call a.run()

I know it isn't working because run calls the original doStuff instead of the mocked/stubbed doStuff.

Scroll the cells using UI Testing

Is there a method like

- (void)scrollByDeltaX:(CGFloat)deltaX deltaY:(CGFloat)deltaY;

for iOS?

I think the above method is only for OSX.

I would like to scroll my table view according to the delta values provided.

Thanks in advance.
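
There is no delta-based scroll call in XCUITest on iOS, but a similar effect can be had by dragging between two coordinates derived from the table's frame; a sketch (Swift 3 naming, the offsets are arbitrary):

let table = XCUIApplication().tables.element
let start = table.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.8))
let end = start.withOffset(CGVector(dx: 0, dy: -200))   // roughly a "deltaY" of -200 points
start.press(forDuration: 0.05, thenDragTo: end)          // the drag scrolls the table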

First step with unit testing [python]

I want to write unit tests for this code but I don't know how to start. Do I need to write unit tests for all functions, or can I skip some of them? In my opinion I should write unit tests for generate_id, add_note, remove_note and edit_note; is that enough?

import json

def get_notes_data(file_name):
    with open(file_name, 'r') as open_file:
        return json.loads(open_file.read(), encoding='utf-8')


notes = get_notes_data('notes_data/notes.json')

def print_column(column_name):
    list_of_notes = [note for note in notes if note['board'] == column_name]
    return [note['message'] for note in notes if note['board'] == column_name]

#lista = print_column('to do')

def print_data():
    all_columns = [note['board'] for note in notes]
    columns = set(all_columns)
    for column in columns:
        print column
        print '\n'
        print print_column(column)
        print '\n \n'


def generate_id():
    max_id = 0
    for note in get_notes_data('notes_data/notes.json'):
        if note['id'] > max_id:
            max_id = note['id']
    return max_id + 1

def save_notes(file_name, notes):
    with open(file_name, 'w') as notes_file:
        json.dump(notes, notes_file, indent=4)

def add_note(column_name, note_message, notes):
    note_data = {"board" : column_name, 
            "message": note_message,
            "id": generate_id()}
    notes.append(note_data)
    save_notes('notes_data/notes.json', notes)
    return note_data


def remove_note(note_id, notes):
    for note in notes:
        if note['id'] == note_id:
            notes.pop(notes.index(note))
    save_notes('notes_data/notes.json', notes)


def edit_note(note_id, message, board, notes):
    changed = False
    for note in notes:
        if note['id'] == note_id:
            note['message'] = message
            note['board'] = board
            changed = True
    if not changed:
        raise IndexError('Index {0} does not exist'.format(note_id))
    save_notes('notes_data/notes.json', notes)
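
As a starting point, the pure logic (for example edit_note) can be tested against an in-memory list while patching save_notes so notes.json is never touched. A sketch, assuming the code above lives in a module named notes and that the module-level get_notes_data call can still find its JSON file at import time (on Python 2 the external mock package provides mock.patch):

import unittest
import mock   # pip install mock (on Python 3: from unittest import mock)

import notes


class EditNoteTest(unittest.TestCase):

    @mock.patch('notes.save_notes')               # keep the test away from notes.json
    def test_edit_note_changes_message_and_board(self, fake_save):
        data = [{'id': 1, 'board': 'to do', 'message': 'old'}]

        notes.edit_note(1, 'new message', 'done', data)

        self.assertEqual(data[0]['message'], 'new message')
        self.assertEqual(data[0]['board'], 'done')
        fake_save.assert_called_once_with('notes_data/notes.json', data)

    @mock.patch('notes.save_notes')
    def test_edit_note_raises_for_unknown_id(self, fake_save):
        with self.assertRaises(IndexError):
            notes.edit_note(99, 'msg', 'board', [])


if __name__ == '__main__':
    unittest.main()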

Testing a function argument which is a function

I'm working on testing some code but I'm having some trouble with sinon. The thing is that one of my functions takes a function as a parameter and I haven't found how to mock it.

Usually you do something like this:

var get = sinon.stub($, 'get')

Then later after using $.get:

sinon.assert.calledWith(get, expectedObject);

My code is as follows:

function getUsers(usersPromise) {
    const config = { date: new Date() };
    return usersPromise(config)
        .then(function (data) {
            // Do stuff
        })
}

What I want to do is to be able to mock usersPromise. So I would check that it was called with the correct config object (I omitted plenty of values) and then also assert some stuff in the .then function.

sinon.stub(usersPromise) won't work so I'm a bit lost.
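
Since the collaborator is passed in as an argument, there is no object to patch; an anonymous stub can simply be handed to getUsers directly. A sketch (with sinon 1.x, .returns(Promise.resolve(...)) replaces .resolves()):

it('calls usersPromise with the config and handles the data', function () {
    var usersPromise = sinon.stub().resolves('data');   // the fake function under our control

    return getUsers(usersPromise).then(function () {
        sinon.assert.calledOnce(usersPromise);
        // the config object is built inside getUsers, so match on its shape
        sinon.assert.calledWithMatch(usersPromise, { date: sinon.match.date });
    });
});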

How can I run CodedUI test on the real WP10 device?

I have a WP10 device (OS version 1607, build 14393) connected by USB to my PC. In the CodedUI test project I set "Device" in the device toolbar and run or debug the test, but nothing happens. Only the loading process of the executing test is present, then the process stops; the test does not start at all. How can I run a CodedUI test on a real device?

Wrong folder created by Jasmine2 and Protractor

I've used Protractor 4.0.11 and Jasmine 2.5.2 with this conf:

onPrepare: function() {
  jasmine.getEnv().addReporter(
    new Jasmine2HtmlReporter({
      savePath: 'target/reports',
      screenshotsFolder: 'target/screenshots',
      fixedScreenshotName: true,
    })
  );}

the HTML report is created and looks like: Screenshot

The link for screenshot in html code of report:

<img src="target/screenshots/should-divide-four-and-two.png" width="100" height="100">

but the created path is: "target\reportstarget\screenshots"

I don't know why Jasmine adds the name from savePath ("reports") in here. When the code was:

onPrepare: function() {
  jasmine.getEnv().addReporter(
    new Jasmine2HtmlReporter({
      savePath: 'target/screenshots',
      fixedScreenshotName: true,
    })
  );}

the folder paths were: report - /target/screenshots, screenshots - /screenshotsscreenshots

Does somebody know how to change it?

How to test asynchronous calls in Android as blackbox with Mockwebserver

I'd like to write an integration test to test one asynchronous part of my system as a black box and I'm trying to use mockwebserver for this task.

My problem is that I don't know how to wait for the callback without blocking the main thread:

As a simplified example of what I need:

Imagine the following interface provided by one class:

public class Checker {

    interface Callback {
        public void onStatusOK();
        public void onStatusFail();
    }

    void checkStatus(String param, Callback callback){
        /* Send HTTP request and call the callback with the result*/
    }
}

At some point after calling checkStatus the app should send an HTTP request to http://ift.tt/2gyisjF and call the callback (in the main thread) with onStatusFail() if the HTTP status code of the request is between 500 and 599; otherwise it should call onStatusOK().

So my test is:

@Test
public void testStatusCallback(){
   CountDownLatch countDownLatch= new CountDownLatch(1);
   MockWebServer server = new MockWebServer();
   // Schedule some responses.
   server.enqueue(new MockResponse().setBody("OK").setResponseCode(200));
   // Start the server.
   server.start();

   NetworkingClass.setBaseUrl(server.url("/"));

   Callback spyCallback= spy(new Callback() {
            public void onStatusOK(){
                 countDownLatch.countDown();
            }
            public void onStatusFail(){
                 countDownLatch.countDown();
            }
  });

   mChecker.checkStatus("test",spyCallback);
   countDownLatch.await();

   verify(spyCallback).onStatusOk();
   server.shutdown();

}

But since the callback is received in the main thread, the call to countDownLatch.await(); blocks it and the test gets stuck.

How can I solve this?

emberJS Testing when using loading route

When no loading route or a loading template is used, all my tests pass but as soon as I add a loading route, some of my tests fail.

how could I integrate the loading route in my tests ?

Laravel ignores testing database connection

I use Laravel 5.3.22 and want to unit-test my application using an in-memory sqlite database, migrating/rolling back for every test, as described here. This is the connections section of my database.php config:

'mysql' => [
            'driver'    => 'mysql',
            'host'      => env('DB_HOST', 'localhost'),
            'database'  => env('DB_DATABASE', 'forge'),
            'username'  => env('DB_USERNAME', 'forge'),
            'password'  => env('DB_PASSWORD', ''),
            ...
        ],

        'testing' => [
            'driver'   => 'sqlite',
            'database' => ':memory:',
            'prefix'   => '',
        ],

This is the phpunit env config:

<php>
    <env name="APP_ENV" value="testing"/>
    <env name="CACHE_DRIVER" value="array"/>
    <env name="SESSION_DRIVER" value="array"/>
    <env name="QUEUE_DRIVER" value="sync"/>
    <env name="DB_CONNECTION" value="testing"/>
</php>

The phpunit config implies that Laravel should use the "testing" sqlite connection for testing, but it doesn't care and goes on with the primary mysql connection. That is not an option; I have a big and complex schema and it can't be used for unit testing with mysql. How do I proceed? I'm just starting with testing and am new here. It seems that unit testing database-backed applications this way makes no sense; it dramatically slows down the process. And I can't mock the query builder since I need to assert on its results.

Test second activity without logging in every time

The user logs in or registers in the first activity, then a new activity is displayed, which I am working on. Can I access this second activity in the emulator or on a device without adding dummy user data to the code? I created an Espresso test to test login, but when the second activity is successfully opened, the test passes and the app logically closes. Can I avoid manually logging in every time to get to the second activity?
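
One common approach is to launch the second activity directly with an ActivityTestRule that does not auto-launch, passing whatever intent extras the login flow would normally supply. A sketch (SecondActivity, the extra key and the view id are placeholders; the usual Espresso static imports are assumed):

@RunWith(AndroidJUnit4.class)
public class SecondActivityTest {

    @Rule
    public ActivityTestRule<SecondActivity> rule =
            new ActivityTestRule<>(SecondActivity.class, true, false); // launchActivity = false

    @Test
    public void showsSecondScreen() {
        Intent intent = new Intent();
        intent.putExtra("logged_in_user", "test-user");   // whatever login would have provided
        rule.launchActivity(intent);

        onView(withId(R.id.some_view)).check(matches(isDisplayed()));
    }
}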

Why rails 4 tests does not reset db

I'm using Ruby 2.3.0 and Rails 4.2.7. I'm also using minitest ( ActionController::TestCase ) and these gems

gem 'capybara', '~>2.7.1'
gem 'poltergeist', '~>1.9.0'
gem 'minitest-rails-capybara', '~>2.1.2'

I was under the impression that every time 'rake test' is called, the database is dropped and created with schema.rb (as happens in all my other projects).

But with this project it doesn't do this (if I manually change something in the test db, it stays that way between tests).

Is this normal? What could be the cause?

Is there a way to start a Play! application before starting tests?

I want to do e2e testing on my Play! application.
So far I was starting the Play! application "by hand" and launching the tests afterwards. I would like to know if there is a way to do that directly with sbt (in order to have to run only one command).

I have seen this in the sbt documentation :

testOptions in FunTest += Tests.Setup(() => println("Setup")),
testOptions in FunTest += Tests.Cleanup(() => println("Cleanup"))

Is there a way to use this in order to start the Play! application (and wait for it to be ready) and stop it in the Cleanup part?

Maybe by starting an external bash script?

What is the best way to check if a website is using all the JS code included?

I am working on a big website with lots of legacy code in it and I really would like to do some clean-up, as I know for sure that many JS libraries are included but not used anymore. Obviously there is also custom code. Unfortunately they do not have any functional testing in place, no documentation, and the team members are all pretty new and no one seems to know anything; a nightmare.

I really do not understand how people can do something like that. There is really no respect or "programmers-code" like: do not leave shit behind you because another programmer like you will have to deal with it after!!!

Our job is awesome and I feel lucky to do it, but lack of respect makes me crazy.

How would you go about it? Do you know any tool that could help in this situations?

How to randomize or change order of tests with Karma?

I have a set of tests and I suspect they can change the behaviour because of mistakenly shared state.

I'd like to change the order to put the suspicious test at the top, or randomize the order completely to run tests in a different order each time.

How do I make this scenario work?

Sunday 27 November 2016

IntelliJ throwing exception when creating Class Level Watch with stream().filter()

When creating a class level watch for the debugger like this

entities.stream().filter(entity -> "VALUE".equals(entity.getValue()))

Where entities is a List I get "Error creating evaluation class loader: java.lang.NullPointerException.

This does not happen if I create a "normal" watch like:

container.getEntities().stream().filter(entity -> "VALUE".equals(entity.getValue()))

Entities is not null, and neither are any of the entities in the list. The code is being debugged during testing.

I cannot find any more information in IntelliJ.

Any ideas where I can check on that?

JUnit cannot find ApplicationContext

So I was using a tutorial here http://ift.tt/2g9eKMO and I keep getting an error when I run it as a JUnit test saying it can't find the application context. The test class:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/testApplicationContext.xml")
public class CarTest {

is this, and I have tried putting testApplicationContext.xml in just about every folder so far. It used to be

 @ContextConfiguration(locations={"/testApplicationContext.xml"}) but I thought if I change it, it might fix it.

Any suggestions?

What are the scenarios we cannot automate using Selenium WebDriver?

Scenarios that cannot be automated using Selenium WebDriver, for example captcha, etc. I need all those scenarios which cannot be automated!

Selenium script not working

I have created a task as admin and assigned it to a writer. As the writer, I need to search for the task and click on that particular task's edit icon, but I am unable to locate the task in the list. On one page 10 tasks are displayed; the task may be on the first page or the 2nd page. Can somebody help me with Selenium code for this?

Running tests on code all on same webpage

I am working on a web application where on the page there are two text boxes/code editors: one for 'code', one for 'test code'. The point is for a user to click a button that says "run the tests", and have the code from the test code editor run against the code in the code editor, and for the results of tests to output on the screen. The point is to eventually end up with a code-wars for unit tests type application.

Thing is, I can't figure out the basic functionality of how to run the tests. I've only ever run tests by doing something like typing "npm test" in the terminal on my mac. I was thinking maybe using docker could work, but then I'm finding docker to be somewhat complicated.

Does anyone have any basic tips on how I could, in a Node.js/Express/Angular type web app, run test code against other code that's coming from a couple of editors I've embedded into my web app?

SSRS report stress load performance test

Is it possible to do a performance (load/stress) test on an SSRS report? i.e to simulate concurrent users accessing and executing queries using the report. If yes, what tool can I use?

How can I generate keyboard inputs for Python's 'input' function?

I have a console program written in Python. I would like to test several input combinations in an automatic test routine. The input is read via Python's input(...) function.

  • How can I emulate a keyboard or any other input stream to send single characters or strings to input?
  • Or do I need to replace input by another function, which is connected to my test cases?
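
The usual trick is to patch the built-in input so each call returns the next scripted value; a minimal sketch (Python 3; on Python 2 the target would be __builtin__.raw_input):

import unittest
from unittest import mock


def ask_twice():
    """Stand-in for the real console code that calls input()."""
    return input("first? "), input("second? ")


class InputTest(unittest.TestCase):

    @mock.patch('builtins.input', side_effect=['alpha', 'beta'])
    def test_scripted_keyboard_input(self, fake_input):
        self.assertEqual(ask_twice(), ('alpha', 'beta'))
        self.assertEqual(fake_input.call_count, 2)


if __name__ == '__main__':
    unittest.main()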

Tic-tac-toe script testing issue in python

I've lately begun my adventure with coding in Python and had to write a script reporting the match result in tic-tac-toe depending on the state of the matrix. I've done it and it seems to work just fine; here is the code (in the tic_tac_toe.py file):

possibleChars = ['X', 'O']
matrixSize = 0
sets = []
finalWinner = []


def state(board):
    count = 0
    for line in board:
        whoWins(possibleChars, line, getColumn(board, count))
        count += 1
    whoWins(possibleChars, diagonals = getDiagonals(board), diagFlag = True)


def getColumn(board, index):
    col = []
    for row in board:
        col.append(row[index])
    return col

def getDiagonals(board):
    diagonals = []
    diag1 = []
    diag2 =[]
    count = 0
    for row in board:
        diag1.append(row[count])
        diag2.append(row[matrixSize-1-count])
        count += 1
    diagonals.extend((diag1, diag2))
    return diagonals


def checkIfMatrix(board):
    l = len(board)
    if all(len(x) == l for x in board):
        global matrixSize
        matrixSize = l
        return True
    else:
        return False

def checkSets():
    for n in sets:
        if 'X' in n and 'X' not in finalWinner:
            finalWinner.append('X')
        elif 'O' in n and 'O' not in finalWinner:
            finalWinner.append('O')

def checkWinner():
    if not finalWinner:
        return '.'
    elif len(finalWinner) is 2:
        return 'DRAW'
    else:
        if 'X' in finalWinner:
            return 'X'
        elif 'O' in finalWinner:
            return 'O'

def whoWins(charList, line = [], col = [], diagonals = [], diagFlag = False):
    global sets
    if diagFlag is False:
        sets.append([char for char in charList if (line.count(char) == len(line) or col.count(char) == len(col))])
    else:
        for diag in diagonals:
            sets.append([char for char in charList if diag.count(char) == len(diag)])

def main(board = ['...','...','...']):
    if checkIfMatrix(board):
        getDiagonals(board)
        state(board)
        checkSets()
        return(checkWinner())
    else:
        return False


if __name__ == "__main__":
    main()

The main() function returns: 'X' if X won, 'O' if O won, 'DRAW' if there was a draw (I know it is a little bit stupid, because there is no such possibility in this game, but it was required in the task description), '.' if there was no winner, and False if the given board dimensions were wrong for playing a tic-tac-toe game. I've checked it manually by giving many different matrices as an argument and it works as it should. But the problem occurs when it comes to running some automatic tests I wrote for this script. Here is the content of the tests.py file:

import unittest
import tic_tac_toe


class TicTacToeStateTest(unittest.TestCase):
    """Tests tic_tac_toe.main."""

    def assert_result(self, board, result):
        self.assertEqual(tic_tac_toe.main(board), result)

    def test_no_winner(self):
        """No winner."""

        board_a = [
            "XO.",
            ".OX",
            ".X.",
        ]
        board_b = [
            "O..",
            ".X.",
            "..O",
        ]
        self.assert_result(board_a, '.')
        self.assert_result(board_b, '.')

    def test_x_won(self):
        """X won."""

        board = [
            "X..",
            ".X.",
            "..X",
        ]
        self.assert_result(board, 'X')

    def test_o_won(self):
        """O won."""

        board = [
            "..O..",
            "..O..",
            "..O..",
            "..O..",
            "..O..",
        ]
        self.assert_result(board, 'O')


    def test_invalid_dimensions(self):
        """Board has invalid dimensions."""

        board = [
            "XXOO.",
            "...X.",
            "OOOO",
            "...",
            ".....",
        ]
        self.assert_result(board, False)

if __name__ == '__main__':
    unittest.main()

The issue is that all tests pass without any problem aside from the one testing if X won. I don't know why, but the case with X raises an AssertionError:

...F
======================================================================
FAIL: test_x_won (__main__.TicTacToeStateTest)
X won.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 35, in test_x_won
    self.assert_result(board, 'X')
  File "tests.py", line 9, in assert_result
    self.assertEqual(tic_tac_toe.main(board), result)
AssertionError: 'DRAW' != 'X'

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=1)

But what is more interesting: with the other tests commented out and only the one for "X won" left, everything works as it should without any exception raised.

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

So here comes my question: why is this happening? What am I doing wrong? I will be really grateful for any response.

Saturday 26 November 2016

What's meaning of "*=" in Nightmare? [duplicate]

This question already has an answer here:

This is a example in Github/Nightmare

var Nightmare = require('nightmare');
var nightmare = Nightmare({ show: true });

nightmare
  .goto('http://yahoo.com')
  .type('form[action*="/search"] [name=p]', 'github nightmare')
  .click('form[action*="/search"] [type=submit]')
  .wait('#main')
  .evaluate(function () {
    return document.querySelector('#main .searchCenterMiddle li a').href
  })
  .end()
  .then(function (result) {
    console.log(result)
  })
  .catch(function (error) {
    console.error('Search failed:', error);
  });

What's the meaning of form[action*="/search"] in line 3?

Need advice about testing using Chimp.js/Mocha in Meteor.js

I'm trying to teach myself testing with Meteor but there is so much conflicting and outdated info online it's really difficult to work out what I need to do.

My current situation is that I have an application using the latest Meteor version (and the imports folder structure).

I've installed chimp globally and have created a /tests directory.

My first test is using chimp/mocha to fill in a form and try to insert something to the database. I'm also using the xolvio/backdoor package and running chimp like so

chimp --ddp=http://localhost:3000 --mocha --path=tests

Here's my test code:

describe('Chimp Mocha', function() {
    describe( 'Create a Client', function() {

        it( 'should fill in add client form', function() {
            browser.setValue('#clientName', 'Test')
                    .setValue('#clientEmail', 'test@test.com')
                    .selectByValue('#numberTeamMembers', '25')
                    .submitForm('#createClient')
        });

        it( 'should check the collections for new client data', function() {
            let getClient = server.execute( function() {
              return Clients.findOne({ name: 'Test' });
            });

            expect( getClient.name ).to.equal( 'Test' );
        });

        after( function() {
            server.execute( function() {
              let client = Clients.findOne( { name: 'Test' } );
              if ( client ) {
                Clients.remove( client._id );
              }
            });
        });


    });
});

This is throwing an error that Clients is undefined

However, if I add

import { Clients } from '/imports/api/clients/clients.js';

I get this error Error: Cannot find module '/imports/api/clients/clients.js'

What am I doing wrong? Should I be using chimp? Any help would really be appreciated because I don't find the Meteor guide very clear about this!

Thanks

Use composer to require package with tests included

I want to reuse test code (e.g. mock classes) from a package. But I don't know how to tell Composer to fetch the dependency with tests included.

My composer.json:

"require": {
   "some/package": "2.0.0"
}

composer.json of the other package (which has /src and /tests subfolders):

"autoload": {
    "psr-4": {
      "Some\\Namespace\\": "src/"
    }
},
"autoload-dev": {
    "psr-4": {
      "Some\\Namespace\\Tests\\": "tests/"
    }
},

This gives me only the /src folder under /vendor/some/package/.

I tried specifying some/package: 2.0.0@dev, without any effect.

Is this even possible with composer (and packagist)?
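One thing worth checking (an assumption, since the package isn't named): Composer installs the dist archive by default, and many packages exclude tests/ from that archive via export-ignore rules in .gitattributes. Forcing an install from source pulls the full Git repository, tests included; a sketch:

{
    "require": {
        "some/package": "2.0.0"
    },
    "config": {
        "preferred-install": {
            "some/package": "source"
        }
    }
}

Note also that a dependency's autoload-dev section is never merged into your autoloader, so the package's test classes would still need a PSR-4 entry in your own autoload (or autoload-dev) configuration.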

Description according to "test-cloud.exe help submit":

--test-params [cspairs] - Additional test parameters, format is comma-separated key:value pairs

I want to pass parameters to my tests by using that command.

Now how can I access these parameters inside the Tests.cs file?

If it's not possible, then is there any other way I can add parameters to my Test Cloud tests and then read them inside Tests.cs file?

vendredi 25 novembre 2016

How to separately run each test in karma-runner

I often get test crashes when the tests run together. I mean, when I use describe.only() to run a single test block I have no errors, but when I run all tests together that test block shows errors. I suppose this happens because all the tests are merged into a single file, tests.webpack.js. Is there any approach to run each test separately to avoid any influence between them?
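One approach, sketched under the assumption that tests.webpack.js uses require.context to pull every spec into one bundle: let karma-webpack treat each spec file as its own entry instead, so state leaking from one file cannot break another. Roughly:

// karma.conf.js (sketch; paths are placeholders)
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    // List the spec files directly instead of a single tests.webpack.js
    // that require.context()s everything into one bundle.
    files: ['src/**/*.spec.js'],
    preprocessors: {
      'src/**/*.spec.js': ['webpack']
    },
    webpack: { /* your existing webpack config */ }
  });
};

This only isolates files, not individual it() blocks, but it usually exposes which file carries the shared state.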

Angular 2 SpyOn provider and inject into constructor

I have a component which is using the Router in the constructor:

constructor(
      private router: Router
  ) {
      router.events.subscribe((event: any) => {
        if( event instanceof  NavigationEnd ) {
          let url: any = event.urlAfterRedirects || event.url;
          switch (url) {
            case '/login':
            case '/forgotPassword':
              this.showHeader = false;
              break;
            default:
              this.showHeader = true;
          }
        }
      });
  }

The ng2 guide on testing provides an example where it adds a 'spy' just before a method calls navigate().

However, I would like to test the component constructor:

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [HeaderComponent],
      providers: [
        { provide: Router,      useClass: FakeRouter },
        { provide: AuthService, useClass: null }
      ]
    });
  });

  it('should be visible on /login', inject([Router], (router: Router) => {
    let routerEventSpy = spyOn(router, 'events')
        .and.returnValue(Observable.of(new NavigationEnd(1, '/login', '/login')));
debugger;
    let fixture = TestBed.createComponent(HeaderComponent);


  }));

I thought that since Router is a singleton, if I add a spy to it in the test and then call TestBed.createComponent, which constructs the component, the component would find the spy already attached. However, this isn't the case; I get:

TypeError: Cannot read property 'subscribe' of undefined

How can I attach a spy before the construction of the component, bearing in mind that I might want to construct the component multiple times for multiple tests.
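Part of the problem is that events is a property holding an Observable, not a method, so spyOn(router, 'events') has nothing to replace by the time the constructor subscribes. A common workaround is to give the fake router a Subject you can push events into after each component is created; a sketch (FakeRouter here is assumed to be your own stub class, as in the providers above):

import { Subject } from 'rxjs/Subject';
import { NavigationEnd, Router } from '@angular/router';

class FakeRouter {
  // The component subscribes to this in its constructor.
  events = new Subject<any>();
}

it('should hide the header on /login', () => {
  const fixture = TestBed.createComponent(HeaderComponent);
  const router: any = TestBed.get(Router);

  // Emit after construction; the subscription set up in the
  // constructor receives it.
  router.events.next(new NavigationEnd(1, '/login', '/login'));
  fixture.detectChanges();

  expect(fixture.componentInstance.showHeader).toBe(false);
});

Because each test calls TestBed.createComponent itself, the same pattern works for constructing the component multiple times.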

Devise password update testing

I'm trying to test my custom RegistrationsController, inherited from the default Devise RegistrationsController.

I created a method to allow users to update their data without providing the current password, and to update the password when the current password is provided.

My custom controller

class Users::RegistrationsController < Devise::RegistrationsController
  ...

  def update_resource(resource, params)
    if params[:password].blank? && params[:password_confirmation].blank?
      resource.update_without_password(params)
    else
      super
    end
  end

end

My tests

require "rails_helper"

RSpec.describe Users::RegistrationsController, type: :controller do
  before do
    @request.env['devise.mapping'] = Devise.mappings[:user]
  end
  describe "POST new password" do
    let(:user) { FactoryGirl.create(:user, password: 'current_password', password_confirmation: 'current_password') }
    before do
      login_as(user, :scope => :user)
    end

    context "with an invalid password parameter" do
      it "renders :edit-template" do
        put :update, params: { id: user.id , user: { current_password: "current_password", password: "12", password_confirmation: "12" } }

        expect(response).to render_template(:edit)
      end
    end

    context "with a valid password parameter" do
      it "update user password in database" do
        put :update,  params: { id: user.id, user: { current_password: "current_password", password: "newpassword", password_confirmation: "newpassword" } }
        user.reload

        expect(user.password).to eq("newpassword")
      end
    end
  end
end

I received the following errors:

  1) Users::RegistrationsController POST new password with an invalid password parameter renders :edit-template
 Failure/Error: expect(response).to render_template(:edit)
   expecting <"edit"> but rendering with <[]>
 # ./spec/controller/users/registrations_controller_spec.rb:59:in `block (4 levels) in <top (required)>'

  2) Users::RegistrationsController POST new password with a valid password parameter update user password in database
 Failure/Error: expect(user.password).to eq("newpassword")

   expected: "newpassword"
        got: "current_password"

   (compared using ==)
 # ./spec/controller/users/registrations_controller_spec.rb:68:in `block (4 levels) in <top (required)>'

Any idea what I am doing wrong?
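Without the routes and factories I can only guess, but one likely issue with the second failure: password is an in-memory virtual attribute in Devise, so after reload it still holds whatever the Ruby object was built with; only encrypted_password is persisted. Asserting against the stored hash is usually done like this (a sketch, and it does not address the render_template failure):

it "updates the user's password in the database" do
  put :update, params: { id: user.id, user: { current_password: "current_password",
                                              password: "newpassword",
                                              password_confirmation: "newpassword" } }

  # Compare against the persisted encrypted_password via valid_password?
  # rather than the virtual `password` attribute, which reload does not touch.
  expect(user.reload.valid_password?("newpassword")).to be true
end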

Mocking DbContext Entry

I am testing a function that contains this code:

void FunctionToTest() {
  if (context.Entry(entity).State == EntityState.Detached)
        {
            // [...]
        }

  // [...]
}

For this reason I created my own test context, which works well, but I am not able to mock DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity), as the DbEntityEntry class has a constructor that requires an internal class.

Any solution for this?
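Since DbEntityEntry cannot be constructed or easily mocked, one common workaround is to stop calling context.Entry directly and go through a small virtual seam that the test context can override. A sketch (the member and class names are made up for illustration):

public class MyContext : DbContext
{
    // Virtual seam: production code calls this instead of Entry(entity).State.
    public virtual EntityState GetEntityState(object entity)
    {
        return Entry(entity).State;
    }
}

public class TestContext : MyContext
{
    public EntityState StateToReturn { get; set; } = EntityState.Detached;

    public override EntityState GetEntityState(object entity)
    {
        return StateToReturn; // no DbEntityEntry involved
    }
}

The function under test then checks context.GetEntityState(entity) == EntityState.Detached, and the test context controls the answer without touching change tracking.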

With TestNG+Spring, how to make @Bean configuration local to just one test?

I have added a @Configuration annotation and @Bean definition to one of my TestNG tests in order to override a deep @Autowired object with a mock.

Problem is, this has messed up all my other integration tests. How can I make the @Bean configuration local to just the one test?

I already tried the @DirtiesContext(classMode=ClassMode.AFTER_CLASS) annotation but that didn't work this time (although I've used it successfully in other cases).
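One way to keep the override out of the shared context, sketched under the assumption that the tests use the Spring TestContext framework via AbstractTestNGSpringContextTests: move the @Configuration into a static nested class of the one test and list it explicitly in that test's @ContextConfiguration, so no other test's (cached) context ever sees it. The class names below are placeholders:

@ContextConfiguration(classes = { AppConfig.class, MyTest.LocalOverrides.class })
public class MyTest extends AbstractTestNGSpringContextTests {

    @Configuration
    static class LocalOverrides {
        @Bean
        @Primary // wins over the real bean, but only inside this test's context
        SomeDeepDependency someDeepDependency() {
            return Mockito.mock(SomeDeepDependency.class);
        }
    }

    @Autowired
    private SomeService someService;

    @Test
    public void usesTheMock() { /* ... */ }
}

Because the classes attribute differs from the other tests, Spring builds and caches a separate context for this test alone, which avoids the cross-test pollution without needing @DirtiesContext.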

Test fails when asserting dates

I am trying to verify that two items I am storing are the same. However, while testing I am getting an error when checking a Date property.

This is my setUp method:

class InputViewControllerTests: XCTestCase {

    var sut: InputViewController!
    var placemark: MockPlacemark!

    override func setUp() {
        super.setUp()
        let storyboard = UIStoryboard(name: "Main",
                                      bundle: nil)
        sut = storyboard
            .instantiateViewController(
                withIdentifier: "InputViewController")
            as! InputViewController
        _ = sut.view
    }
}

This is the extension of my test class:

extension InputViewControllerTests {
    class MockGeocoder: CLGeocoder {
        var completionHandler: CLGeocodeCompletionHandler?
        override func geocodeAddressString(
            _ addressString: String,
            completionHandler: @escaping CLGeocodeCompletionHandler) {
            self.completionHandler = completionHandler
        }
    }

    class MockPlacemark : CLPlacemark {
        var mockCoordinate: CLLocationCoordinate2D?
        override var location: CLLocation? {
            guard let coordinate = mockCoordinate else
            { return CLLocation() }
            return CLLocation(latitude: coordinate.latitude,
                              longitude: coordinate.longitude)
        }
    }
}

This is my test:

func test_Save_UsesGeocoderToGetCoordinateFromAddress() {
        let dateFormatter = DateFormatter()
        dateFormatter.dateFormat = "MM/dd/yyyy"
        let timestamp = 1456095600.0
        let date = Date(timeIntervalSince1970: timestamp)
        sut.titleTextField.text = "Foo"
        sut.dateTextField.text = dateFormatter.string(from: date)
        sut.locationTextField.text = "Bar"
        sut.addressTextField.text = "Infinite Loop 1, Cupertino"
        sut.descriptionTextField.text = "Baz"
        let mockGeocoder = MockGeocoder()
        sut.geocoder = mockGeocoder
        sut.itemManager = ItemManager()
        sut.save()
        placemark = MockPlacemark()
        let coordinate = CLLocationCoordinate2DMake(37.3316851,
                                                    -122.0300674)
        placemark.mockCoordinate = coordinate
        mockGeocoder.completionHandler?([placemark], nil)
        let item = sut.itemManager?.item(at: 0)
        let testItem = ToDoItem(title: "Foo",
                                itemDescription: "Baz",
                                timestamp: timestamp,
                                location: Location(name: "Bar",
                                                   coordinate: coordinate))
        XCTAssertEqual(item, testItem)
    }

This is the implementation of the save() method:

class InputViewController: UIViewController {
    // ...
    @IBAction func save() {
        guard let titleString = titleTextField.text,
            titleString.characters.count > 0 else { return }
        let date: Date?
        if let dateText = self.dateTextField.text,
            dateText.characters.count > 0 {
            date = dateFormatter.date(from: dateText)
        } else {
            date = nil
        }
        let descriptionString = descriptionTextField.text
        if let locationName = locationTextField.text,
            locationName.characters.count > 0 {
            if let address = addressTextField.text,
                address.characters.count > 0 {
                geocoder.geocodeAddressString(address) {
                    [unowned self] (placeMarks, error) -> Void in
                    let placeMark = placeMarks?.first
                    let item = ToDoItem(
                        title: titleString,
                        itemDescription: descriptionString,
                        timestamp: date?.timeIntervalSince1970,
                        location: Location(
                            name: locationName,
                            coordinate: placeMark?.location?.coordinate))
                    self.itemManager?.add(item)
                }
            }
        }
    }
}

I am having trouble trying to figure out what is wrong with this. The error I am getting is:

test_Save_UsesGeocoderToGetCoordinateFromAddress()] failed: XCTAssertEqual failed: ("Optional(ToDo.ToDoItem(title: "Foo", itemDescription: Optional("Baz"), timestamp: Optional(1456030800.0), location: Optional(ToDo.Location(name: "Bar", coordinate: Optional(__C.CLLocationCoordinate2D(latitude: 37.331685100000001, longitude: -122.03006739999999))))))") is not equal to ("Optional(ToDo.ToDoItem(title: "Foo", itemDescription: Optional("Baz"), timestamp: Optional(1456095600.0), location: Optional(ToDo.Location(name: "Bar", coordinate: Optional(__C.CLLocationCoordinate2D(latitude: 37.331685100000001, longitude: -122.03006739999999))))))") -

As can be clearly seen, the problem is that the timestamp is not the same in both, and I have no idea why it is changing.
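A likely explanation (not visible in the snippets, so treat it as an assumption): dateFormatter.string(from:) with "MM/dd/yyyy" throws away the time of day, and save() parses the string back as midnight in the formatter's local time zone, so the round-tripped value no longer equals the original 1456095600. The difference in the failure message is a whole number of hours, which points at exactly this kind of time-zone round trip. One way to make the assertion robust is to build the expected timestamp through the same round trip the production code performs:

// Derive the expected timestamp the same way save() does:
// format the date to a day-only string, then parse it back.
let roundTripped = dateFormatter.date(from: dateFormatter.string(from: date))!
let testItem = ToDoItem(title: "Foo",
                        itemDescription: "Baz",
                        timestamp: roundTripped.timeIntervalSince1970,
                        location: Location(name: "Bar", coordinate: coordinate))

Alternatively, pinning dateFormatter.timeZone (in both the test and the view controller) removes the dependency on the machine's local zone.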

Testing web module from test module

I have two modules which both use Spring MVC. One is a web application that has controllers that I need to test, and the other module will be used for testing purposes. I need to be able to load the web module from the testing module in order to test my controllers. The part where I am stuck is that when the test fires a call to the URI that a controller is mapped to, I get a 404 error. That means the web application is not running and has no mapping for that URI. Does anyone know how I can solve this?
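A 404 in this setup usually means the dispatcher that owns the controller mappings was never loaded into the test's context. With spring-test, the usual pattern is to build a MockMvc on top of the web module's application context instead of hitting a real URL; a sketch (WebConfig stands in for the web module's Spring MVC configuration and is an assumption on my part):

// static imports of MockMvcRequestBuilders.get and MockMvcResultMatchers.status assumed
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = WebConfig.class) // the web module's MVC config
public class MyControllerIT {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Builds a servlet-less MVC stack with the real handler mappings,
        // so requests to controller URIs no longer 404.
        mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
    }

    @Test
    public void returnsOk() throws Exception {
        mockMvc.perform(get("/some/uri")).andExpect(status().isOk());
    }
}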

How to test the action inside a function that updates the state

I have some difficulty in understanding how to test my action inside a function:

    componentDidMount() {
    $.ajax(
        {
            url: this.props.url,
            dataType: 'json',
            cache: false,
            success: function (data) {
                if (this.props.url === "./information.json")
                    this.props.updateInfosAction(data.transport);
                else
                    this.props.updateNotifsAction(data.transport);
            }.bind(this),
            error: function (xhr, status, err) {
                console.error(this.props.url, status, err.toString());
            }.bind(this)
        }
    );
}

Here is my test to be sure that componentDidMount is called (it works):

    it('Should call component did mount', () => {
   expect(GetInfo.prototype.componentDidMount.calledOnce);
});

But I have no idea how to actually test the action.
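One way to test it, sketched with sinon and under the assumption that the actions are passed in as props (the component file name and import paths are placeholders): stub $.ajax so that it synchronously invokes the success option with canned data, then assert that the right prop was called.

import React from 'react';
import { mount } from 'enzyme';
import sinon from 'sinon';
import $ from 'jquery';
import GetInfo from './GetInfo';

it('dispatches updateInfosAction with the transport payload', () => {
  const data = { transport: [{ id: 1 }] };
  // Make $.ajax call the `success` option immediately with fake data.
  const ajaxStub = sinon.stub($, 'ajax').yieldsTo('success', data);
  const updateInfosAction = sinon.spy();

  mount(<GetInfo url="./information.json"
                 updateInfosAction={updateInfosAction}
                 updateNotifsAction={sinon.spy()} />);

  sinon.assert.calledWith(updateInfosAction, data.transport);
  ajaxStub.restore();
});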

compare program results on cpp and python

I would like to test cpp code using Python.

I have the following code and an a.exe file, which I get after compilation:

int main() {
    std::istream& input_stream = std::cin;
    std::ostream& output_stream = std::cout;
    Data input_data = ReadData(input_stream);
    Data output_data = DoSomethingWithData(input_data);
    OutputData(output_data, output_stream);
    return 0;
}

And I have py code:

input_data = ''
for line in sys.stdin:
    input_data += line
output_data = do_something_with_data(input_data)
print(output_data)

I would like to write a Python script that can give the same input to the C++ program and the Python program and compare their outputs. Is there an easy way to do it?
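A small sketch of such a driver using only the standard library (a.exe comes from the question; the Python module name "solution.py" and the sample input are assumptions to adjust):

import subprocess
import sys

def run(cmd, input_text):
    # Feed the same stdin to a program and capture its stdout as text.
    result = subprocess.run(cmd, input=input_text, stdout=subprocess.PIPE,
                            universal_newlines=True, check=True)
    return result.stdout

if __name__ == "__main__":
    test_input = "1 2 3\n"
    cpp_out = run(["./a.exe"], test_input)
    py_out = run([sys.executable, "solution.py"], test_input)
    if cpp_out == py_out:
        print("OK")
    else:
        print("MISMATCH:\ncpp: %r\npy:  %r" % (cpp_out, py_out))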

Mobile hybrid app testing using Codedui

I was working on an Ionic mobile hybrid app and looking for automation testing tools. I found CodedUI, a solution for testing Windows/web applications, and I was wondering if it can be used for testing Ionic apps as well. I would appreciate it if someone who has done a similar task could share their views. Many thanks.

End to end testing framework for C++ application

I am testing a C/C++ application. For the majority of methods I was able to write unit tests using CppUTest. But there are a few for which I could not, and I want to write integration/end-to-end tests for those methods too. What I want to test is whether

  • correct output file is generated

  • for invalid arguments, proper error messages are printed (it is a command line tool)

  • it displays correct output messages

My question is whether there are tools for this, or should I write some scripts to invoke my application, capture its output, etc.? If so, how should these scripts be started? Should I invoke them from CppUTest?

meaning of sequences of events in testing

When testing a program, inputs and outputs are typically values or sequences of events.

What is the meaning of sequences of events? Would you please give me an example?

Android UIautomator swiping ViewAnimator

I'm testing my app but have a problem with the DatePicker. All I need is to swipe up until another month appears (searching for a specific date).

Screenshot

The structure is a bit tricky though since there is a ViewAnimator (The calendar) showing a ListView (one month). So I can't scroll down this list view since it only contains one month. Instead I need to swipe up the viewAnimator. The problem is, I can't find any method for scrolling until a specific position like with ListViews.

Structure

Is there a method like

listView.scrollTextIntoView("November 2018");

The only half working solution I found was

swipe(Direction.UP, 0.1f);

which is basically just doing a (kinetic) swipe. So I can't really estimate how often I have to do that until I reach "November 2018" for instance. Any ideas?
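I don't know of a built-in scrollTextIntoView equivalent for a ViewAnimator, but one workaround (a sketch using the uiautomator 2.0 API; the resource id and month text are placeholders) is to keep swiping the animator until the wanted month header becomes visible, with a retry cap so the test cannot loop forever:

UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
UiObject2 calendar = device.findObject(By.res("android", "animator")); // placeholder id

int attempts = 0;
while (!device.hasObject(By.text("November 2018")) && attempts++ < 30) {
    // Swipe over 80% of the animator's bounds; a lower speed reduces
    // the kinetic overshoot mentioned above.
    calendar.swipe(Direction.UP, 0.8f, 300);
    device.waitForIdle();
}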

How to test method which uses Random() but can't pick the same number twice

I'm creating a little Java game which randomly picks from a list of primitive data types (using Random().nextInt()) and asks which type is larger (or if they're both the same). I've also made it so that if the same primitive data type is picked twice, random.nextInt() is called again to ensure the selections are different.

My trouble now comes in testing that the code works. Below is the Game.class:

static final PrimitiveDataType BOOLEAN_TYPE = new PrimitiveDataType("boolean", 0);
    static final PrimitiveDataType BYTE_TYPE = new PrimitiveDataType("byte", 8);
    static final PrimitiveDataType SHORT_TYPE = new PrimitiveDataType("short", 16);
    static final PrimitiveDataType CHAR_TYPE = new PrimitiveDataType("char", 16);
    static final PrimitiveDataType INT_TYPE = new PrimitiveDataType("int", 32);
    static final PrimitiveDataType LONG_TYPE = new PrimitiveDataType("long", 64);
    static final PrimitiveDataType FLOAT_TYPE = new PrimitiveDataType("float", 32);
    static final PrimitiveDataType DOUBLE_TYPE = new PrimitiveDataType("double", 64);
static List<PrimitiveDataType> PRIMITIVE_TYPES = Arrays.asList(BOOLEAN_TYPE, BYTE_TYPE, SHORT_TYPE, CHAR_TYPE,
        INT_TYPE, LONG_TYPE, FLOAT_TYPE, DOUBLE_TYPE);

static List<PrimitiveDataType> chosenDataTypes = new ArrayList<PrimitiveDataType>();

private static int numberOfQuestions; 

static Random numberGenerator = new Random();

static void setChosenDataTypeIndexs(Random numberGenerator) {

    int choice1 = numberGenerator.nextInt(PRIMITIVE_TYPES.size()-1)+0;
    int choice2 = numberGenerator.nextInt(PRIMITIVE_TYPES.size()-1)+0;

    System.out.println("Random Roll (1) " + choice1);
    System.out.println("Random Roll (2) " + choice2);
    do {
        choice2 = numberGenerator.nextInt(PRIMITIVE_TYPES.size()-1)+0;
    } while (choice1==choice2);

    Game.chosenDataTypes.add(PRIMITIVE_TYPES.get(choice1));
    Game.chosenDataTypes.add(PRIMITIVE_TYPES.get(choice2));

}

static PrimitiveDataType getChosenDataTypeIndexs(int i) {
    return chosenDataTypes.get(i);
}

public static void setNumberOfQuestions(int i) {

    numberOfQuestions = i;

}

I've had a go at writing a test class with Mockito, but I'm not sure if I'm mocking correctly, because the test passes even when the dice roll outputs the same number. Also, if I mock out the output of Random.nextInt() to a specific value, wouldn't this create an infinite loop as the code looks for a different number?

public class GameTest {

    @Test
    public void getChosenDataTypesTest(){

        Random randomNumberMock = Mockito.mock(Random.class);
        when(randomNumberMock.nextInt()).thenReturn(1);

        Game.setChosenDataTypeIndexs(randomNumberMock);

        assertNotEquals(Game.chosenDataTypes.get(0), Game.chosenDataTypes.get(1));

        verify(randomNumberMock,times(2)).nextInt();
    }

    @Test
    public void setNumberOfQuestionsTest(){

        Game.setNumberOfQuestions(1);

        assertEquals(1,Game.getNumberOfQuestions());
    }
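On the specific worry about the infinite loop: the stub has to target the overload the code actually calls, nextInt(int bound), not nextInt(); and Mockito can return different values on consecutive calls, so the do/while can be driven out of the "same number" case deliberately. A sketch (assuming chosenDataTypes starts empty for this test):

Random randomNumberMock = Mockito.mock(Random.class);
// First two calls return 1 (forcing the duplicate branch), the third returns 2,
// so the do/while in setChosenDataTypeIndexs terminates after one retry.
when(randomNumberMock.nextInt(anyInt())).thenReturn(1, 1, 2);

Game.setChosenDataTypeIndexs(randomNumberMock);

assertNotEquals(Game.getChosenDataTypeIndexs(0), Game.getChosenDataTypeIndexs(1));
verify(randomNumberMock, times(3)).nextInt(anyInt());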

jeudi 24 novembre 2016

Record test coverage per test case using eclEmma tool

I want to record test coverage per test case using the EclEmma tool. The coverage should contain the percentage of the target class covered by that test case, and I also want to access the statements executed by that test case. Following is the code, which runs a test class and generates the coverage on the test class itself.

package expJaCoCo;
public class Calculadora
{
    public Calculadora() { }

    public int add(int x, final int y) {
        return x + y;
    }
}

CalculadoraTest.java

package expJaCoCo;
import junit.framework.TestCase;
import org.junit.BeforeClass;
import org.junit.AfterClass;
import org.junit.Test;
public class CalculadoraTest extends TestCase
{
    private Calculadora c1;

    @BeforeClass
    public void setUp() { c1 = new Calculadora(); }

    @AfterClass
    public void tearDown() { c1 = null; }

    @Test
    public void testAdd() { assertTrue(c1.add(1, 0) == 1); }
}

CoreTutorial.java

package expJaCoCo;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IClassCoverage;
import org.jacoco.core.analysis.ICounter;
import org.jacoco.core.data.ExecutionDataStore;
import org.jacoco.core.instr.Instrumenter;
import org.jacoco.core.runtime.IRuntime;
import org.jacoco.core.runtime.LoggerRuntime;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
public class CoreTutorial
{
    /**
     * A class loader that loads classes from in-memory data.
     */
    public static class MemoryClassLoader extends ClassLoader
    {
        private final Map<String, byte[]> definitions = new HashMap<String, byte[]>();
        /**
         * Add a in-memory representation of a class.
         * 
         * @param name name of the class
         * @param bytes class definition
         */
        public void addDefinition(final String name, final byte[] bytes) {
            definitions.put(name, bytes);
        }
        @Override
        protected Class<?> loadClass(final String name, final boolean resolve) throws ClassNotFoundException
        {
            final byte[] bytes = definitions.get(name);
            if (bytes != null)
                return defineClass(name, bytes, 0, bytes.length);
            return super.loadClass(name, resolve);
        }
    }
    private InputStream getTargetClass(final String name)
    {
        final String resource = '/' + name.replace('.', '/') + ".class";
        return getClass().getResourceAsStream(resource);
    }
    private void printCounter(final String unit, final ICounter counter)
    {
        final Integer missed    = Integer.valueOf(counter.getMissedCount());
        final Integer total     = Integer.valueOf(counter.getTotalCount());

        System.out.printf("%s of %s %s missed%n", missed, total, unit);
    }
    private String getColor(final int status)
    {
        switch (status) {
        case ICounter.NOT_COVERED:
            return "red";
        case ICounter.PARTLY_COVERED:
            return "yellow";
        case ICounter.FULLY_COVERED:
            return "green";
        }
        return "";
    }
    private void runTutorial() throws Exception
    {
        final String targetName = CalculadoraTest.class.getName();
        // For instrumentation and runtime we need a IRuntime instance to collect execution data:
        final IRuntime runtime = new LoggerRuntime();
        // The Instrumenter creates a modified version of our test target class that contains additional probes for execution data recording:
        final Instrumenter instr = new Instrumenter(runtime);
        final byte[] instrumented = instr.instrument(getTargetClass(targetName));
        // Now we're ready to run our instrumented class and need to startup the runtime first:
        runtime.startup();
        // In this tutorial we use a special class loader to directly load the instrumented class definition from a byte[] instances.
        final MemoryClassLoader memoryClassLoader = new MemoryClassLoader();
        memoryClassLoader.addDefinition(targetName, instrumented);
        final Class<?> targetClass = memoryClassLoader.loadClass(targetName);
        // Here we execute our test target class through its Runnable interface:
        /*final Runnable targetInstance = (Runnable) targetClass.newInstance();
        targetInstance.run();*/
        JUnitCore junit = new JUnitCore();
        Result result = junit.run(targetClass);
        System.out.println(result.getRunTime());
        // At the end of test execution we collect execution data and shutdown the runtime:
        final ExecutionDataStore executionData = new ExecutionDataStore();
        runtime.collect(executionData, null, false);
        runtime.shutdown();
        // Together with the original class definition we can calculate coverage information:
        final CoverageBuilder coverageBuilder = new CoverageBuilder();
        final Analyzer analyzer = new Analyzer(executionData, coverageBuilder);
        analyzer.analyzeClass(getTargetClass(targetName));
        // Let's dump some metrics and line coverage information:
        for (final IClassCoverage cc : coverageBuilder.getClasses())
        {
            System.out.printf("Coverage of class %s%n", cc.getName());
            printCounter("instructions", cc.getInstructionCounter());
            printCounter("branches", cc.getBranchCounter());
            printCounter("lines", cc.getLineCounter());
            printCounter("methods", cc.getMethodCounter());
            printCounter("complexity", cc.getComplexityCounter());
            for (int i = cc.getFirstLine(); i <= cc.getLastLine(); i++) {
                System.out.printf("Line %s: %s%n", Integer.valueOf(i), getColor(cc.getLine(i).getStatus()));
            }
        }
    }
    public static void main(final String[] args) throws Exception {
        new CoreTutorial().runTutorial();
    }
}

This example executes and instruments CalculadoraTest and provides the coverage of CalculadoraTest.java, but I want the coverage of Calculadora.java. How can I change the code to get the desired result?
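Only classes that were both instrumented and loaded through the MemoryClassLoader produce execution data, and here Calculadora is pulled in un-instrumented via the parent class loader, so it never shows up in the report. A sketch of the change, reusing the same single-argument calls the tutorial already uses: instrument and register Calculadora as well, and analyze its original bytes at the end.

// Next to the existing instrumentation of CalculadoraTest:
final String appName = Calculadora.class.getName();
final byte[] instrumentedApp = instr.instrument(getTargetClass(appName));
memoryClassLoader.addDefinition(appName, instrumentedApp);
// CalculadoraTest is loaded by memoryClassLoader, whose loadClass()
// checks the definitions map first, so the test now links against the
// instrumented Calculadora instead of the original one.

// And at the analysis stage, analyze the class whose coverage you want:
analyzer.analyzeClass(getTargetClass(appName));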

Having Travis-CI check files into Github?

I have a requirement for some very basic static file hosting, something that GitHub pages can easily handle - HTML & images only.

However, the HTML and images are generated by a Travis-CI test script - so after the Travis build is done, I want it to push a directory of artifacts back into GitHub.

Preferably into a git repo different to the one that triggered the build, but within the same GitHub organisation.

I know I can probably write a script that does the pull and push into the repo, but I'm unsure if I need to give travis extra keys or hooks.

Is this possible?
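This is possible. One route worth looking at (sketched below; the directory, repo slug and token variable are placeholders) is Travis' GitHub Pages deploy provider, which pushes a directory to a gh-pages branch using a personal access token stored as a secure environment variable, so no extra SSH deploy keys are needed:

# .travis.yml (sketch)
deploy:
  provider: pages
  skip_cleanup: true
  local_dir: build/artifacts        # directory produced by the test script
  github_token: $GITHUB_TOKEN       # personal access token, set in repo settings
  repo: my-org/my-static-site       # target repo in the same organisation
  target_branch: gh-pages
  on:
    branch: master

A hand-rolled script that clones the target repo, copies the artifacts and pushes with the token works too, but the deploy provider keeps the token out of your script and out of the build log.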

how to write c unit tests for a basic calculator

I'm trying to learn how we can do some basic tests on C programs. For example, I have a basic command-line calculator program and I want to verify whether there are bugs.

I will need to run the same tests several times, always with the same inputs and expected outputs, while making small changes in each iteration.

So instead of always entering the same values and checking whether the result is the expected one, I created a little program to automate this process. The program reads each line of a text file to feed the inputs into the calculator program and compare the expected output with the output obtained.

For example the text file is like this:

10 - 5 = 5 (test case: subtraction of positive integers)
10 - 0  = 10 (test case: subtraction with zero)
10 - (-5) = (test case: subtraction with negative integers)
 ...

So in this file have the test cases where I have the inputs and outputs (the expected result) to test the subtraction operation.

But I would like to know whether there are any frameworks that are widely used and more precise for doing this kind of test. I was reading that Unity can do this, but when I tried to use it I kept running into issues that I cannot solve.

So I would like to know how to do this in Unity or in another framework, and whether you can provide a basic example so I can understand.

For example I was trying to test the first test case, the subtraction of positive integers:

This is a function that I was trying to do to check the expected result with the actual result:

void test_case_subtraction_of_positive_integers(int x, int y){
    UNITY_BEGIN();
    int expected = 5;
    int actual = x-y;
    TEST_ASSERT_EQUAL(expected, actual);
    return UNITY_END();
}

This is a function of the calculator program responsible for the subtraction:

void subtraction()
{
    int a, b, c = 0;
    printf("\nPlease enter first number  : ");
    scanf("%d", &a);
    printf("Please enter second number : ");
    scanf("%d", &b);
    c = a - b;
    runTestSubtraction(10, 5);
    RUN_TEST();
    printf("\n%d - %d = %d\n", a, b, c);

}
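Unity works best when the logic under test is a pure function, kept separate from the scanf/printf interaction, and when the tests live in their own executable rather than being called from inside the calculator. One common restructuring (a sketch; subtract() is a new helper, not part of the original program) looks like this:

/* calc.c - extracted logic, called by the interactive subtraction() */
int subtract(int a, int b)
{
    return a - b;
}

/* test_calc.c - separate test runner linked against unity.c and calc.c */
#include "unity.h"

int subtract(int a, int b);

void setUp(void) {}
void tearDown(void) {}

void test_subtraction_of_positive_integers(void)
{
    TEST_ASSERT_EQUAL_INT(5, subtract(10, 5));
}

void test_subtraction_with_zero(void)
{
    TEST_ASSERT_EQUAL_INT(10, subtract(10, 0));
}

void test_subtraction_with_negative_integers(void)
{
    TEST_ASSERT_EQUAL_INT(15, subtract(10, -5));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_subtraction_of_positive_integers);
    RUN_TEST(test_subtraction_with_zero);
    RUN_TEST(test_subtraction_with_negative_integers);
    return UNITY_END();
}

The interactive subtraction() then only reads the two numbers and calls subtract(a, b); the UNITY_BEGIN/RUN_TEST/UNITY_END calls stay in the test runner's main, not inside the production code.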

What's a good way to test a VHDL microprocessor?

I've built a 16-bit RISC pipelined processor in VHDL based on an ISA that was developed in my college. The ISA has a lot of corner cases and I'm wondering how to go about testing the VHDL code.

I've currently built a basic testbench which reads a hex file (generated by an assembler) and loads the data into the microprocessor, providing the clk and reset input signals.

Currently, I'm just trying out random assembly programs and looking at the gtkwave output trying to see whether the relevant signals have the correct values.

  1. Is there a systematic way of testing a large digital system such as a microprocessor?
  2. What's the best way to identify corner cases of an ISA? I'm willing to write some scripts if they will help me generate some test cases.
  3. What would be a good VHDL testbench for such a system? I'm finding it hard to think of a good solution as I'm sometimes unsure of the number of clock cycles it would take the processor to compute some result.

Improving the readability of Capybara tests

I'm writing a rspec/capybara test that ensures input fields in a form display the correct validations.

I'm concerned that my code is not very readable. How can I refactor this to make sure that it's readable?

describe "#page" do
  context "form validation" do
    1.upto(4) do |index|
      it "throws correct validation error when #{5-index} field(s) is (are) empty" do
        login(page_path)
        fill_in_form(index)
        find('.button').trigger(:click)
        expect(all('.form-error').count).to eq 5-index
        all('.form-error')[-1...index-5].each do |error|
          expect(error.text).to eq "#{@inputs[5-index][:error_message]} is required"
        end
      end
    end
  end
end


def fill_in_form(number_of_fields)
  (0).upto(number_of_fields-1) do |i|
    fill_in(@inputs[i][:name], :with => (@inputs[i][:value]))
  end
end

def login(path)
  visit(path)
  # redirected to login
  acceptance_login(@user)
  visit(path)
  # input fields
  @inputs = [
    {name: 'first_name', error_message: "First name is not valid", value: "John"},
    {name: 'last_name', error_message: "Last name is not valid", value: "Doe"},
    {name: 'company', error_message: "Company name is not valid", value: "My company"},
    {name: 'role', error_message: "Role is not valid", value: "Developer"},
    {name: 'phone', error_message: "Phone number is not valid", value: "(800) 492-1111"}
  ]
end

write unit test in c using unity

I'm trying to automate some tests with Unity.

For example I have a c program that works like a very basic calculator where I have for example the division operation.

I would like to automate the test environment. I have some test cases where the inputs of the program will always be the same, and I want to verify whether the output is equal to what was expected. So for the division I have some test cases:

4 / 2 = 2 (teste case to test division with positive numbers)

-40 / -10 = 4 (test case to test division with negative numbers)

40 / 0 = "error msg" (test case to test division by zero)

Now I would like to use unity to automate this tests.

Can you give me some help on how to achieve this? I'm trying to do this but I don't get any correct result. I started to learn the Unity basics by doing some simple examples like:

   #include "unity.h"

    void test1(void)
    {
        TEST_ASSERT(1==1);
        TEST_ASSERT(2==2);
    }
    void test2(void)
    {
       TEST_ASSERT_EQUAL(2,2);
       TEST_ASSERT_EQUAL_INT(2,3);
       TEST_ASSERT_EQUAL_FLOAT(1.1,1.2);
    }

    int main()
    {
        UNITY_BEGIN();
        RUN_TEST(test1);
        RUN_TEST(test2);
        return UNITY_END();
    }

But how can we use Unity to interact with a command-line program? For example, here is the part of the calculator program relative to the division operation:

calc.c:

void division();

printf("Enter / symbol for Division \n");

void division()
{
    int a, b, d=0;
    printf("\nPlease enter first number  : ");
    scanf("%d", &a);
    printf("Please enter second number : ");
    scanf("%d", &b);
    d=a/b;
    printf("\nDivision of entered numbers=%d\n",d);
}

The other part of the program, not so relevant for the division example:

int main()
{
    int X=1;
    char Calc_oprn;
    // Function call
    calculator_operations();
    while(X)
    {
        printf("\n");
        printf("%s : ", KEY);
        Calc_oprn=getche();
        switch(Calc_oprn)
        {
            case '+': addition();
                      break;



            case '/': division();
                      break;

            case 'H':
            case 'h': calculator_operations();
                      break;

            case 'Q':
            case 'q': exit(0);
                      break;
            case 'c':
            case 'C': system("cls");
                      calculator_operations();
                      break;

            default : system("cls");

    printf("\n**********You have entered unavailable option");
    printf("***********\n");
    printf("\n*****Please Enter any one of below available ");
    printf("options****\n");
                      calculator_operations();
        }
    }
}
//Function Definitions
void calculator_operations()
{
    //system("cls");  use system function to clear
    //screen instead of clrscr();
    printf("\n             Welcome to C calculator \n\n");

    printf("******* Press 'Q' or 'q' to quit ");
    printf("the program ********\n");
    printf("***** Press 'H' or 'h' to display ");
    printf("below options *****\n\n");
    printf("Enter 'C' or 'c' to clear the screen and");
    printf(" display available option \n\n");
    printf("Enter * symbol for Multiplication \n");
    printf("Enter / symbol for Division \n");

}
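Unity itself has no notion of driving an interactive program: the test either calls the extracted functions directly (as in the sketch after the other Unity question above) or treats the whole binary as a black box. For the black-box style, here is a sketch that shells out to the compiled calculator, feeds it a prepared input file, and asserts on the captured output (the binary name, file names and expected string are assumptions). Note that getche() reads the console directly on Windows, so the menu key may need to be read with getchar()/scanf for redirection to work - another reason to keep the interactive layer thin.

#include <stdio.h>
#include <string.h>
#include "unity.h"

void setUp(void) {}
void tearDown(void) {}

/* Runs ./calc with stdin redirected from an input file and stdout
 * captured to a file, then checks that the expected text appears. */
static void run_and_expect(const char *input_file, const char *expected)
{
    char command[256];
    char output[1024] = {0};

    snprintf(command, sizeof(command), "./calc < %s > output.txt", input_file);
    TEST_ASSERT_EQUAL_INT(0, system(command));

    FILE *f = fopen("output.txt", "r");
    TEST_ASSERT_NOT_NULL(f);
    fread(output, 1, sizeof(output) - 1, f);
    fclose(f);

    TEST_ASSERT_NOT_NULL(strstr(output, expected));
}

void test_division_with_positive_numbers(void)
{
    /* division_positive.txt contains: "/", "4", "2", "q" on separate lines */
    run_and_expect("division_positive.txt", "Division of entered numbers=2");
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_division_with_positive_numbers);
    return UNITY_END();
}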

Maven install does not encode in UTF-8 even if configured

Hi I have a problem with the encoding of my project.

When I run JUnit tests from Eclipse, there are no failures. The problem is that when I do maven > clean and maven > install, one of the tests fails.

I have this string: "ADMINISTRACIÓN", and it's fine when i run the JUnit from eclipse, but I've printed the variable and when maven does the tests, the value of this string is: "ADMINISTRACI�N".

I've changed every encoding property I could find in Eclipse to UTF-8, and configured the POM this way:

      (...)
      <project>
        <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            (...)
       </properties>
      </project>
      (...)
      <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.0</version>
            <configuration>
              <encoding>UTF-8</encoding>
                <source>1.7</source>
                <target>1.7</target>
            </configuration>
        </plugin>
     </plugins>
     (...)

But the output is the same. I have a coworker who has the same project as me, and the same Eclipse client and config, and her Maven tests print accents with no trouble.

Any further ideas?

Thanks a lot!
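One more thing worth checking, since project.build.sourceEncoding only controls compilation: the JVM that Surefire forks for the tests still uses the platform default charset, which on Windows is typically not UTF-8, and that mismatch produces exactly the "Ó" turning into "�" symptom only under a Maven build. Forcing the test JVM's file.encoding is a common fix (a sketch; keep your existing Surefire version if you pin one):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <argLine>-Dfile.encoding=UTF-8</argLine>
    </configuration>
</plugin>

A different default locale on the coworker's machine would also explain why the same project behaves differently for her.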

HTML5/JavaScript bandwidth speed test for mobile app

I have been struggling with one problem for a couple of days.

I am developing a mobile app in HTML5. My client wants to add a Bandwidth Speed Test option on it.

So I started looking around on the web at what had already been done, and I found out that almost everything is done using Adobe Flash... which won't work... it needs to be JavaScript/HTML5.

I found this:

http://ift.tt/2fsG5oY

I tried to use it but ... I receive an error during the download test:

(console output of the example page)

  • [Download] Restarting measures with 10.000 MB of data
  • [Download] The minimum delay of 8.000 seconds has not been reached
  • [Download] Restarting measures with 10.000 MB of data...
  • [Download] Final average speed: NaN MBps [Download] Finished measures

I sincerely don't know what the problem is; I tried to debug the code but I could not find the reason for this NaN (Not a Number) error.

Any Help? Suggestions? Other solutions?

Remember, I am developing a mobile solution (HTML5).
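If the third-party script keeps producing NaN, it may be simpler to measure the download side yourself: fetch a file of known size with a cache-busting query string, time it, and divide. A minimal sketch (the test-file URL and its size are placeholders you would host yourself):

function measureDownloadMbps(url, sizeInBytes, callback) {
  var xhr = new XMLHttpRequest();
  var start;

  xhr.open('GET', url + '?cachebust=' + Date.now(), true);
  xhr.responseType = 'arraybuffer';

  xhr.onloadstart = function () { start = Date.now(); };
  xhr.onload = function () {
    var seconds = (Date.now() - start) / 1000;
    var megabits = (sizeInBytes * 8) / (1024 * 1024);
    callback(megabits / seconds); // NaN only if seconds is 0 or size is unknown
  };
  xhr.onerror = function () { callback(NaN); };
  xhr.send();
}

// Usage: a ~5 MB file hosted on your own server
measureDownloadMbps('https://example.com/speedtest/5mb.bin', 5 * 1024 * 1024, function (mbps) {
  console.log('Approx. download speed: ' + mbps.toFixed(2) + ' Mbps');
});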

Plug-in “org.codecover.eclipse” was unable to instantiate class “org.codecover.eclipse.junit.JUnitLaunchConfigurationDelegate” while using JUnit

I've been trying to run my test classes with the CodeCover framework, but I am getting an error when I use the configuration Run As -> CodeCover Measurement for JUnit.

I am not sure what the problem is, as I have already run these test classes successfully as JUnit tests. Please advise.

Cordova app testing

I am writing a test harness for Cordova app. Is it possible to write unit tests on Cordova apps, if so can some one explain how and link me to some resources.

Currently I am using automation only. Can someone link me to some helpful automation testing resources?

I am lost

Testing password update - Rspec, Capybara, Devise

I'm pretty new to RoR and testing, and I'm trying to test a password update on a RegistrationsController inherited from Devise::RegistrationsController.

I included an update_resource method at the end of my controller, shown here.

My controller

class Users::RegistrationsController < Devise::RegistrationsController

 ...

 def update_resource(resource, params)
   if params[:password].blank? && params[:password_confirmation].blank?
     resource.update_without_password(params)
   else
     super
   end
 end

end

My controller test file

require "rails_helper"

RSpec.describe Users::RegistrationsController, type: :controller do

  describe "update password" do
    before do
      @request.env['devise.mapping'] = Devise.mappings[:user]
    end
    let(:user){ FactoryGirl.create(:user, password: 'current_password', password_confirmation: 'current_password') }

    context "with a valid password parameter" do
      it "updates user in the database" do

        put :update, params: { id: user, user: FactoryGirl.attributes_for(:user, password: '12', password_confirmation: '12') }
        user.reload

        expect(user.password).to eq("newpassword12")
      end
    end
  end

end

I'm receiving the following error:

 2) Users::RegistrationsController update password with a valid password parameter updates user in the database
 Failure/Error: expect(user.password).to eq("newpassword12")

   expected: "newpassword12"
        got: "current_password"

   (compared using ==)
 # ./spec/controller/users/registrations_controller_spec.rb:75:in `block (4 levels) in <top (required)>'

Any idea what I am doing wrong?

Test: How to verify that a method is called?

Many mock frameworks have a feature to verify whether a method is called or not. However, most frameworks require that the code follows the dependency injection pattern.

The code that I'm trying to test does NOT use the dependency injection pattern; therefore a mock of the object cannot be injected.

Code ex.:

public class TestMeClass{
   public void TransformMe() {}
}

 public abstract class SomeeClass{
   public void SomeMethod(){
    CallMeMethod();
   }

   private void CallMeMethod() {
      TestMeClass testMeClass = new TestMeClass();
      testMeClass.TransformMe();
   }
}

How can I verify that TransformMe() is called?

Jon Skeet I need you.
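Without dependency injection the usual options are either an isolation framework that can intercept constructor calls, or introducing a small seam so the creation of TestMeClass can be overridden in a test. A sketch of the seam approach (CreateTestMe is a new member, not part of the original code, and TransformMe would need to be virtual for a mock to record the call):

public abstract class SomeeClass
{
    public void SomeMethod()
    {
        CallMeMethod();
    }

    // Seam: tests override this to hand back a substitute/spy.
    protected virtual TestMeClass CreateTestMe()
    {
        return new TestMeClass();
    }

    private void CallMeMethod()
    {
        TestMeClass testMeClass = CreateTestMe();
        testMeClass.TransformMe();
    }
}

A test then derives from SomeeClass, returns a mock from CreateTestMe, calls SomeMethod, and verifies that TransformMe was invoked in the usual way.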