jeudi 31 mars 2016

When I execute the script "(exec -l -a specialname /bin/bash -c 'echo $0' ) 2> error", why does it output ^[7^[[r^[[999;999H^[[6n to error?

When I run the bash test suite, run-builtins fails. After some searching, I found that it outputs ^[7^[[r^[[999;999H^[[6n to stderr, so I redirected stderr to a file named error. When I cat the file I see only a blank line, but when I open it in vim I see "^[7^[[r^[[999;999H^[[6n". Why?
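For context: those bytes are terminal control sequences rather than text. ESC 7 saves the cursor, ESC [r resets the scroll region, ESC [999;999H jumps toward the bottom-right corner, and ESC [6n asks the terminal to report the cursor position; together they look like a window-size probe. Since cat sends the ESC characters straight to the terminal, the file looks blank. A sketch of two standard ways to make them visible (the sequence below is typed out by hand, not produced by bash):

```shell
# Reproduce the same escape sequence in a file, then inspect it.
printf '\0337\033[r\033[999;999H\033[6n' > error

cat -v error   # non-printing chars shown with ^ notation: ^[7^[[r...
od -c error    # raw bytes, with ESC rendered as octal 033
```

This is also why vim shows the literal ^[ characters: it displays the bytes instead of interpreting them.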

How to apply "for each" loop coverage testing?

I was wondering if "for" loop coverage can be applied to "for each" loops as well. If so, how can it be done on the following code sample?

public static void foreachDisplay(int[] data) {
    System.out.println("Display an array using for each loop");
    for (int a : data) {
        System.out.print(a + " ");
    }
}
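Loop coverage for a for-each loop is usually exercised the same way as for a counted loop: run the body zero times, once, and more than once. A minimal sketch (a plain main with explicit checks rather than a JUnit harness, so it is self-contained; the display helper returns the text instead of printing it, purely to make assertions easy):

```java
// Exercises a for-each loop with the three classic loop-coverage cases:
// zero, one, and many iterations.
class LoopCoverageSketch {

    // Same shape as foreachDisplay, but returning the rendered text.
    static String display(int[] data) {
        StringBuilder sb = new StringBuilder();
        for (int a : data) {
            sb.append(a).append(" ");
        }
        return sb.toString();
    }

    static void check(boolean cond) {
        if (!cond) throw new AssertionError("loop coverage case failed");
    }

    public static void main(String[] args) {
        check(display(new int[] {}).equals(""));              // zero iterations
        check(display(new int[] {7}).equals("7 "));           // exactly one
        check(display(new int[] {1, 2, 3}).equals("1 2 3 ")); // many
        System.out.println("ok");
    }
}
```

In JUnit these three cases would become three test methods with assertEquals; the zero-iteration case is the one most often forgotten.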

Thank you.

Use console to test classes

I'm creating an application using C# WPF. I'm early in the development stage, so I haven't quite gotten to making my interface yet. Is it possible for me to create tests and run them using the console?

Laravel testing without firing cache events

I am using the array cache driver while testing; however, I want to disable the cache events.

I can do this with

$this->withoutEvents();

I'd rather just stop the one event, however

$this->expectsEvents(Illuminate\Cache\Events\KeyForgotten::class);

will throw an error if the event is not called.

One solution would be a function that lets an event fire (and hides it) but does not throw an error if the event never occurs.

I think I need to mock the Events Dispatcher like so:

$mock = Mockery::mock('Illuminate\Contracts\Events\Dispatcher');

$mock->shouldReceive('fire')->andReturnUsing(function ($called) {
    $this->firedEvents[] = $called;
});

$this->app->instance('events', $mock);

return $this;

The question is then: how do I carry on dispatching the events that are not caught?

How can I test HTTPS endpoints in go?

So I'm trying to mock out a response to an HTTPS request using httptest.TLSServer, but the http.Client making the request keeps telling me that the server is giving an invalid HTTP response. I suspect this is because there's no valid SSL certificate. Is there any way I can get the client to ignore TLS verification while still mocking out an HTTPS request?

The test code in question

func TestFacebookLogin(t *testing.T) {
    db := glob.db
    server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        resp := `
        {
          "id": "123",
          "email": "blah",
          "first_name": "Ben",
          "last_name": "Botwin"
        }
        `

        w.Write([]byte(resp))
    }))
    defer server.Close()

    transport := &http.Transport{
        Proxy: func(r *http.Request) (*url.URL, error) {
            return url.Parse(server.URL)
        },
    }

    svc := userService{db: db, client: &http.Client{Transport: transport}}

    _, err := svc.FacebookLogin("asdf")

    if err != nil {
        t.Errorf("Error found: %s", err)
    }

}

And the request that I'm trying to make:

url := "https://<Some facebook api endpoint>"
req, err := http.NewRequest("GET", url, nil)
resp, err := client.Do(req)
if err != nil {
    return nil, err
}
defer resp.Body.Close()
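One approach that should work (a sketch, with a made-up endpoint URL) is to keep httptest.NewTLSServer but give the client a Transport whose DialTLS always dials the test server with InsecureSkipVerify, so any https:// request both reaches the mock and skips certificate checks:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
)

// fetchViaMock starts a TLS test server and fetches an https URL through
// a client that (a) always dials the test server and (b) skips cert
// verification, since httptest's certificate is self-signed.
func fetchViaMock() string {
	server := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"id": "123", "first_name": "Ben"}`))
	}))
	defer server.Close()

	transport := &http.Transport{
		DialTLS: func(network, addr string) (net.Conn, error) {
			return tls.Dial("tcp", server.Listener.Addr().String(),
				&tls.Config{InsecureSkipVerify: true})
		},
	}
	client := &http.Client{Transport: transport}

	// Hypothetical endpoint: the hostname is never actually resolved,
	// because DialTLS ignores addr and dials the test server.
	resp, err := client.Get("https://example.invalid/fb/me")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(fetchViaMock())
}
```

This sidesteps the proxy route entirely (an https proxy implies CONNECT tunnelling, which httptest's server does not speak). On Go 1.9+, server.Client() is another option: it returns a client that already trusts the test certificate, though requests must then target server.URL directly.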

Why is my Specflow [AfterTestRun] hook called twice

[AfterTestRun]

This hook for me is being called twice.

My C# code is correct, and at the end of each Scenario I save my results to a ConcurrentBag.

Then I use the [AfterTestRun] hook to read the ConcurrentBag and save the data to a database. I see duplicated data, so I assume the hook is being called twice.

Additional Info:
I am using SpecRun to run my tests in parallel with the following profile

Execution stopAfterFailures="1" retryCount="0" testThreadCount="3" testSchedulingMode="Sequential"

Also, how will this hook behave if one has multiple scenarios within each feature? Currently I have 1.

Thanks

Scalatra filter executes after request in test

While writing tests using scalatra-test, I'm facing a problem where the filter executes after the controller. Here is the code snippet:

class MyTest extends ScalatraSuite with FunSuiteLike {
    addFilter(classOf[MyFilter], "/*")
    addServlet(classOf[MyController], "/*")

    test("my test desc") {
        get("/my/path") {
            ...
        }
    }
}

In debug mode I noticed that the get("/my/path") handler is executed before the filter's get("/*") method. This issue occurs only when running tests.
Please advise.

Unit Testing Shiny Apps

So I have been writing a fairly detailed Shiny app, which will need updating in the future as the functionality behind what is run is constantly changing.

What I need is unit tests (either using testthat or another library better suited to Shiny apps) that let me run these tests in a more automated fashion.

I have written a simple Shiny app. For the sake of testing, I would like a way to know that if I choose the number 20 in the numeric input, then I get 400 as the output$out text, without actually running the app myself.

library(shiny)

ui <- fluidPage(title = 'Test App', 
    numericInput('num', 'Number', 50, 1, 100, 0.5),
    'Numeric output',
    textOutput('out')
)

server <- function(input, output, session) {
  aux <- reactive(input$num ^ 2)

  output$out <- renderText(aux())
}

shinyApp(ui = ui, server = server)

What kind of JUnit test or other test can I do in Java to test my program? [on hold]

I have developed several parsing Java programs. Each essentially takes a file, extracts data from it, and transforms the data into values for an INSERT line. An SQL file is then generated by the Java program.

One example would be parsing tab-separated values into an SQL file. What kind of JUnit test or other test can I perform in Java?
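One natural unit test for this kind of program is a pure function test: feed the parser one line of input and assert on the generated INSERT statement. A hedged sketch (the method name and SQL shape below are invented for illustration, not taken from the question):

```java
// Unit-testing a tiny TSV -> INSERT transformation with plain assertions.
class TsvToSqlSketch {

    // Hypothetical helper: turns one tab-separated line into an INSERT.
    static String toInsert(String table, String line) {
        String[] fields = line.split("\t", -1);
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + " VALUES (");
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(", ");
            // Escape single quotes so the generated SQL stays valid.
            sb.append("'").append(fields[i].replace("'", "''")).append("'");
        }
        return sb.append(");").toString();
    }

    public static void main(String[] args) {
        String sql = toInsert("person", "Ada\tLovelace");
        if (!sql.equals("INSERT INTO person VALUES ('Ada', 'Lovelace');"))
            throw new AssertionError(sql);

        // Edge case: an embedded quote must be escaped.
        String quoted = toInsert("person", "O'Brien");
        if (!quoted.equals("INSERT INTO person VALUES ('O''Brien');"))
            throw new AssertionError(quoted);

        System.out.println("ok");
    }
}
```

In JUnit the same checks become assertEquals calls. Beyond that, useful layers are an integration test that round-trips the generated SQL through an in-memory database such as H2, and edge-case tests for empty fields, missing columns, and quoting.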

Describe device regression testing

How would you describe device regression testing and how it is carried out? I know what regression testing is, but this term is very new to me. I understand it is practised in some companies.

One to One relationship in factory - Integrity Error

I'm using factory_boy to create the factories for the app I'm working on. I'm having an issue when trying to create the factory for a model which has a one-to-one relationship to another model.

Here are the models:

class Playlist(AccountDependantMixin, models.Model):
    image = models.ImageField(_('image'), upload_to='playlist/images/', blank=True, null=True)

    title = models.CharField(_('title'), max_length=100)
    sub_title = models.CharField(_('sub_title'), max_length=100)
    description = models.TextField(_('sub_title'), max_length=1000, blank=True, null=True)

    _ranking = models.IntegerField(_('ranking'), default=0)
    creator = models.ForeignKey('core.User', related_name="playlists")
    categories = models.ManyToManyField('core.Category', related_name="playlists", blank=True)

    is_published = models.BooleanField(_('published'), default=False)
    is_shared = models.BooleanField(_('shared'), default=False)

    visible_for = models.ManyToManyField('core.Group', related_name="visible_playlists")
    mandatory_for = models.ManyToManyField('core.Group', related_name="mandatory_playlists")

    test = models.OneToOneField('core.PlaylistTest', related_name='playlist')

    created_on = models.DateTimeField(auto_now_add=True)

    amount_of_views = models.IntegerField(blank=True, default=0)

class PlaylistTest(Test):
    pass

These are the factories:

class PlaylistTestFactory(factory.DjangoModelFactory):
    class Meta:
        model = PlaylistTest


class PlaylistFactory(factory.DjangoModelFactory):
    class Meta:
        model = Playlist
    title = factory.Sequence(lambda n: 'playlist%d' % n)
    creator = factory.SubFactory(InstructorUserFactory)
    sub_title = factory.Sequence(lambda n: 'sub_title%d' % n)
    description = factory.Sequence(lambda n: 'description%d' % n)
    test = factory.RelatedFactory(PlaylistTestFactory)
    is_published = True

And here is how I'm trying to initialize the instance with the factory:

self.playlist = PlaylistFactory(creator=AdminUserFactory(account=self.account))

I'm getting the following error:

IntegrityError: null value in column "test_id" violates not-null constraint
DETAIL:  Failing row contains (1, , playlist0, sub_title0, description0, 0, t, f, 2016-03-31 12:49:23.739207+00, 0, 2, 1, null).

I have not found much documentation about one-to-one relationships and factory_boy, and what I have found has not helped solve the problem. I thought it could be because the PlaylistTest model was empty, but I added some fields to it and the problem persisted.
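For what it's worth, factory_boy's RelatedFactory runs after the base object has been saved, whereas SubFactory builds the related object first and passes it into the constructor, which is exactly the difference a NOT NULL OneToOneField cares about. A toy sketch of that ordering (plain Python standing in for Django and factory_boy, so the class and function names here are illustrative only):

```python
class IntegrityError(Exception):
    pass

class PlaylistTest:
    pass

class Playlist:
    def __init__(self, test=None):
        self.test = test

    def save(self):
        # Mimics the database's NOT NULL constraint on test_id.
        if self.test is None:
            raise IntegrityError('null value in column "test_id"')

def create_with_subfactory():
    # SubFactory order: related object first, then the base object is saved.
    playlist = Playlist(test=PlaylistTest())
    playlist.save()          # constraint satisfied
    return playlist

def create_with_related_factory():
    # RelatedFactory order: base object saved first, related object later,
    # which is too late for the constraint.
    playlist = Playlist()
    playlist.save()          # raises IntegrityError here
    playlist.test = PlaylistTest()
    return playlist
```

So the usual fix for this shape of error is declaring `test = factory.SubFactory(PlaylistTestFactory)` in PlaylistFactory instead of RelatedFactory.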

How to get Symfony profiler in Behat Test

I was wondering how to get the Symfony profiler within a Behat test. I've read the official docs about it but I keep getting the error

"You need to tag the scenario with @mink:symfony2".

I think this is a configuration problem:

behat.yml

default:
  suites:
    default:
      contexts:
       - FeatureContext:
           em: '@doctrine.orm.entity_manager'
           router: '@router'
           tokenStorage: '@security.token_storage'
           encoder: '@security.password_encoder'
           session: '@session'

  extensions:
    Behat\Symfony2Extension: ~
    Behat\MinkExtension:
      base_url: "http://ift.tt/1Mxg9E2"
      goutte: ~
      selenium2:
        wd_host: "http://ift.tt/MlyWIA"
      sessions:
        default:
          symfony2: ~

config_test.yml

framework:
  test: ~
  session:
    storage_id: session.storage.mock_file
  profiler:
    collect: true

web_profiler:
  toolbar: false
  intercept_redirects: false

swiftmailer:
  disable_delivery: true

My feature context extends MinkContext and I am using Selenium to drive the tests that require JavaScript. I've tagged the feature that I want to execute as follows:

@mink:symfony2
@javascript
Scenario: User send mail
# ...

Gatling user injection for 50 total users in 1 hour adding 10 users per 5 minutes

I need to set up a Gatling test with a total of 50 concurrent users, but I can't find an injection profile that achieves it.

I tried rampUsers(10) over (60 minutes), but that only reaches 10 concurrent users.
Using constantUsersPerSec(users) during (60 minutes) is too stressful.

Is there any suggestion?

Thanks.

How to click a String in Eggplant if it displays on multiple lines on the device

I am using Eggplant Functional and I am creating a script to click a text, for example "I want to click it". Since I am testing on multiple mobile devices, the given text does not always appear on a single line, and hence I am not able to click it.

Java code Testing

I have a few Java packages in my workspace. I have made a change in one of the packages' classes: the addition of a new method. How should I go about checking whether the new change is backward compatible or not? Should this be done as part of integration testing?

I have added a value field to an enum class, and to extract the value field I have added a getter method. FYI, the enums are stored in the database.

How should I go about checking this in the easiest possible way?

How detailed should my manual test cases be?

I'm testing a business app and my boss keeps insisting that my test cases are too detailed and bring no value to the company. For UI and functionality testing I was just testing every textbox, menu, etc., writing a proper test case in MTM.

How much detail should I include in my test cases? How detailed should they be?

Gradle TestListener logging

I've written a Gradle TestListener (implementing org.gradle.api.tasks.testing.TestListener) and registered it in build.gradle:

test{addTestListener(new com.abcd.GradleTestAdaptor())}

This test listener uses a third-party library which prints a warning and stack trace via SLF4J to the console. This causes a lot of noise in the test output, so I would like to suppress it. I have configured SLF4J and Log4j as dependencies of my project and placed a log4j.xml with root level error in src/test/resources. The warning is still being printed, so it appears my logging configuration is not being picked up.

I'm wondering: as the TestListener is part of the build (akin to a plugin), will it pick up my logging configuration? If not, how can I make it do so?

No Karma report with singleRun = false

I wonder if I am missing something trivial here, but I cannot see any test reports when singleRun is set to false in my Karma config. It only shows that the browsers were launched, and that is it. I can click on DEBUG and inspect the browser console log that way, but I feel one should also see the results in the terminal.

Thanks for the help!

My karma.config.js:

basePath: '../',

// start these browsers
// available browser launchers: http://ift.tt/1ft83KU
browsers: ['PhantomJS'],

frameworks: ['mocha', 'chai'],

files: [
  { pattern: 'test/vendor/indexeddbshim.min.js', watched: false },
  { pattern: 'tests.webpack.js', watched: false },
],

preprocessors: {
  'tests.webpack.js': ['webpack'],
},

webpack: {
  resolve: {
    root: [
      path.resolve('./test/vendor'),
    ],
    alias: {
      backbone: 'backbone',
      underscore: 'underscore',
    },
  },
  module: {
    loaders: [
      {
        // test: /^\.js$/,
        exclude: /(node_modules|bower_components|vendor)/,
        loader: 'babel-loader',
      },
    ],
  },
},

webpackServer: {
  noInfo: true,
},

// enable / disable watching file and executing tests whenever any file changes
autoWatch: false,

// test results reporter to use
// possible values: 'dots', 'progress'
// available reporters: http://ift.tt/1ft83KQ
reporters: ['progress'],

// web server port
port: 9876,

// enable / disable colors in the output (reporters and logs)
colors: true,

// Continuous Integration mode
// if true, Karma captures browsers, runs the tests and exits
singleRun: false,

plugins: [
  require('karma-webpack'),
  require('karma-mocha'),
  require('karma-chai'),
  require('karma-phantomjs-launcher'),
  require('karma-chrome-launcher'),
],
logLevel: config.LOG_INFO,
});
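As a guess at the cause: with singleRun: false, Karma only captures the browsers and then waits for a trigger, and with autoWatch: false nothing ever triggers a run, so no results reach the terminal. A sketch of the two combinations that do produce a report (assuming everything else stays as in the config above):

```javascript
// karma.conf.js fragments -- two combinations that actually run tests.

// CI style: capture browsers, run once, report to the terminal, exit.
const ciMode = { singleRun: true, autoWatch: false };

// Dev style: keep browsers open, re-run and report on every file change.
const devMode = { singleRun: false, autoWatch: true };
```

With singleRun: false and autoWatch: false together, a run only happens when triggered externally (e.g. `karma run` against the running server).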

Discourse User-Created Event Breaks Post Creation Tests

In Discourse, I'm trying to set up a user-created event hook, but when I do so, it breaks some tests related to post creation. The new event, which lives on another model, seems to be causing topic_created to fire multiple times while preventing post_created from firing.

in /app/models/user.rb

after_create do
  DiscourseEvent.trigger(:user_created, self)
end

Relevant test output

Failures:

  1) PostCreator new topic success triggers extensibility events
     Failure/Error: creator.create
     Mocha::ExpectationError:
       unexpected invocation: DiscourseEvent.trigger(:topic_created, #<Topic:0x7fcbdc666ae8>, {:title => 'hello world topic', :raw => 'my name is fred', :archetype_id => 1}, #<User:0x7fcbdd907c10>)
       unsatisfied expectations:
       - expected exactly once, not yet invoked: DiscourseEvent.trigger(:post_created, anything, anything, #<User:0x7fcbdd907c10>)
       - expected exactly once, invoked twice: DiscourseEvent.trigger(:topic_created, anything, anything, #<User:0x7fcbdd907c10>)
       satisfied expectations:
       - expected at least once, invoked 4 times: DiscourseEvent.trigger(:markdown_context, anything)
       - expected exactly once, invoked once: DiscourseEvent.trigger(:after_trigger_post_process, anything)
       - expected exactly once, invoked once: DiscourseEvent.trigger(:before_create_topic, anything, anything)
       - expected exactly once, invoked once: DiscourseEvent.trigger(:after_validate_topic, anything, anything)
       - expected exactly once, invoked once: DiscourseEvent.trigger(:validate_post, anything)
       - expected exactly once, invoked once: DiscourseEvent.trigger(:before_create_post, anything)
     # ./lib/post_creator.rb:220:in `trigger_after_events'
     # ./lib/post_creator.rb:150:in `create'
     # ./spec/components/post_creator_spec.rb:82:in `block (4 levels) in <top (required)>'

What are the important things to consider when testing read and write speed of network servers?

I want to learn about performance testing of network servers. What are the important factors to consider when testing the read and write speed of network servers/disks? And how does one test those factors? Are there any specific tools you can recommend?

Execute java code with spring dependencies before JMeter test

I'm trying to do some stress tests on an API with JMeter. I have two environments (QA and production) and I want to set up QA's database before running JMeter tests.

I can't use the JDBC or MongoDB configuration elements because it's a cloud database (DynamoDB on AWS). I thought about using raw requests with an API token to AWS's API, but I'd prefer to use a Java class I already have (a class that issues create/delete queries to the cloud DB); however, it has Spring dependencies.

I know that JMeter can run some Java code, but I don't know how to run classes with Spring dependencies, like a BeanFactoryPostProcessor.

Any ideas?

Why does the xpath change for a UILabel in a UITableViewCell in Appium?

I have a UITableViewCell which has Subject, Name, Description, and Time. When I test this on Appium, the xpath changes for the Subject line's UI element only. xpath of the subject line: //UIAApplication[1]/UIAWindow[1]/UIATableView[1]/UIATableCell[3]**/UIAStaticText[2]**

After the change, xpath of the subject line: //UIAApplication[1]/UIAWindow[1]/UIATableView[1]/UIATableCell[3]**/UIAStaticText[3]**

This returns another UI element's details, which makes the test fail. Please help me handle this from Appium, or do I need to fix it in Xcode?

When I try to run a Python file I get this error

Error:

AttributeError: 'NoneType' object has no attribute 'decompressobj'

My program:

from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://www.ubuntu.com/')

mercredi 30 mars 2016

If the client keeps on changing the requirements every now and then, then what testing method should be followed?

I always perform regression testing as soon as changes come up. The problem is that the client comes up with changes or additional requirements every now and then, and that makes things messy. I test something and then the whole thing gets changed. Then I have to test the changed module again and perform integration testing with the other modules linked to it.

How to deal with such cases?

I need the answer urgently.

Thanks.

What steps are required to increase performance of Android application?

Do we have any checklist for checking and improving the performance of an app before launching an Android application on the market?

Please let me know if any tool is available for performance testing.

How to test crashes in Android for multiple devices?

We are using Fabric for crash reporting (Crashlytics). Daily we receive crash reports, and they are related to specific devices.

My question is how to avoid crashes in Android, and if one occurs, how can I test for it? Most of the crashes are device specific or network specific.

  • Is there any tool to identify and test crashes before the app is moved to production?
  • Is there any way to test the app on different devices for different functionalities?

Python: Nosetests with multiple files

This is a broad question because no one seems to have found a solution to it yet, so I think asking to see a working example might prove more useful. So here goes:

Has anyone run a nosetests on a python project using imports of multiple files/packages?

What I mean is, do you have a directory listing such as:

project/
    |
    |____app/
    |      |___main.py
    |      |___2ndFile.py
    |      |___3rdFile.py
    |
    |____tests/
           |____main_tests.py

Where your main.py imports multiple files, and you run nosetests from the project folder using a test script in the main_tests.py file? If so, could you please screenshot the import sections of all your main files and of your main_tests.py file?

This seems to be a major issue with nosetests, with no apparent solution.
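For what it's worth, the layout above does work once both directories are packages and imports are package-qualified from the project root. A minimal sketch (shown with python -m unittest so it is self-contained; nosetests run from project/ discovers the same layout, and the file contents here are invented):

```shell
# Minimal layout matching the tree above (file contents simplified).
mkdir -p project/app project/tests
touch project/app/__init__.py project/tests/__init__.py  # make both packages

cat > project/app/main.py <<'EOF'
def double(x):
    return 2 * x
EOF

cat > project/tests/main_tests.py <<'EOF'
import unittest
from app.main import double   # package-style import, resolved from project/

class MainTests(unittest.TestCase):
    def test_double(self):
        self.assertEqual(double(21), 42)
EOF

cd project
python3 -m unittest tests.main_tests   # nosetests discovers the same layout
```

The usual gotchas are a missing tests/__init__.py, or running the test command from inside tests/ rather than from the project root (which breaks the `app.main` import).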

Black box testing of webpages in python

Does anyone know of any APIs that I could use to test my website from a black-box point of view?

I would need to enter some text into a text box and extract the corresponding output for multiple cases on the same page.

I would like to perform a load and stress test on this website.

Pardon my incorrect jargon if there is any, as I am extremely new to web development.

Phoenix print log when running "mix test"

I'm trying to debug a test that is not passing. I'm trying to print something so I can see the values of a tuple when mix test is run. I've tried doing this:

require Logger

test "creates element", %{conn: conn} do
    Logger.debug "debugging #{inspect conn}"
    conn = post conn, v1_content_path(conn, :create), content: @valid_attrs
    ...
    ...
end

But nothing is printed! It's driving me nuts! Here is where I read to do what I'm doing How to pretty print conn content?

How can Visual Studio 2015 WebTest run on SSL secured site without certificate?

I am using Visual Studio 2015 to run Web Performance and Load Tests on our web site. The site uses SSL throughout. The tests run correctly on our production site, but now I need to run them on our 'Staging' site. The staging site does not have a valid security certificate. When I run a webtest, I get the error: "Request failed: The request was aborted: Could not create SSL/TLS secure channel" on every request. What do I need to do to get my webtests running on the staging site?

Thank you for your help.

error C2352 illegal call of non-static member function

I am creating a heap-based priority queue using a dynamically sized array. I am aware that vectors would be simpler to implement, but this is a learning exercise for me. Everything works great, but I am having issues only when attempting some unit testing in Visual Studio 2013, where I'm experiencing this error (C2352: illegal call of non-static member function).

Here is the source file where I attempt to run the Unit tests:

//Prog1Test.cpp
#include "UnitTest.h"
#include <iostream>

int main()
{
    PriorityQueue Q = PriorityQueue();
    UnitTest::test1(Q);
    UnitTest::test2(Q);
    UnitTest::test3(Q);
    UnitTest::test4(Q);
    return 0;
}

Here is the UnitTest.cpp:

//UnitTest.cpp
#include "UnitTest.h"
#include <cassert>

void UnitTest::test1(PriorityQueue Q)
{
    Q.clear();
    Q.append('a');
    Q.append('b');
    assert(Q.size() == 2);
    assert(Q.check() == true);
}

void UnitTest::test2(PriorityQueue Q)
{
    Q.clear();
    Q.append('b');
    Q.append('a');
    assert(Q.size() == 2);
    assert(Q.check() == false);
}

void UnitTest::test3(PriorityQueue Q)
{
    Q.clear();
    Q.insert('a');
    Q.insert('b');
    assert(Q.size() == 2);
    assert(Q.check() == true);
    assert(Q.remove() == 'a');
    assert(Q.size() == 1);
}

void UnitTest::test4(PriorityQueue Q)
{
    Q.clear();
    Q.insert('b');
    Q.insert('a');
    assert(Q.size() == 2);
    assert(Q.check() == true);
    assert(Q.remove() == 'a');
    assert(Q.size() == 1);
}

Here is the UnitTest header file:

//UnitTest.h
#ifndef UnitTest_H
#define UnitTest_H
#include "PriorityQueue.h"

class UnitTest
{
public:
    void test1(PriorityQueue Q);
    void test2(PriorityQueue Q);
    void test3(PriorityQueue Q);
    void test4(PriorityQueue Q);
};


#endif

Here is the PriorityQueue class header:

#ifndef PriorityQueue_H
#define PriorityQueue_H

class PriorityQueue
{
private:
    char *pq;
    int length;
    int nextIndex;
    char root;
public:
    PriorityQueue();
    ~PriorityQueue();
    char& operator[](int index);
    void append(char val);
    int size();
    void clear();
    void heapify();
    bool check();
    void insert(char val);
    char remove();
    friend class UnitTest;
};


#endif

Here is the PriorityQueue.cpp file:

#include<math.h>
#include "PriorityQueue.h"




PriorityQueue::PriorityQueue()
{
    pq = new char[0];
    this->length = 0;
    this->nextIndex = 0;
}




PriorityQueue::~PriorityQueue() {
    delete[] pq;
}




char& PriorityQueue::operator[](int index) {
    char *pnewa;
    if (index >= this->length) {
        pnewa = new char[index + 1];
        for (int i = 0; i < this->nextIndex; i++)
            pnewa[i] = pq[i];
        for (int j = this->nextIndex; j < index + 1; j++)
            pnewa[j] = 0;
        this->length = index + 1;
        delete[] pq;
        pq = pnewa;
    }
    if (index > this->nextIndex)
        this->nextIndex = index + 1;
    return *(pq + index);
}




void PriorityQueue::append(char val) {
    char *pnewa;
    if (this->nextIndex == this->length) {
        this->length = this->length + 1;
        pnewa = new char[this->length];
        for (int i = 0; i < this->nextIndex; i++)
            pnewa[i] = pq[i];
        for (int j = this->nextIndex; j < this->length; j++)
            pnewa[j] = 0;
        delete[] pq;
        pq = pnewa;
    }
    pq[this->nextIndex++] = val;
}



int PriorityQueue::size() {
    return this->length;
}




void PriorityQueue::clear() {
    delete[] pq;
    pq = new char[0];
    this->length = 0;
    this->nextIndex = 0;
}




void PriorityQueue::heapify() {
    char parent;
    char root;
    char temp;
    for (double i = this->length - 1; i >= 0; i--)
    {
        root = pq[0];
        int parentindex = floor((i - 1) / 2);
        int leftchildindex = 2 * i + 1;
        int rightchildindex = 2 * i + 2;
        if (pq[(int)i] <= pq[leftchildindex] && pq[(int)i] <= pq[rightchildindex])
        {
            pq[(int)i] = pq[(int)i];
        }
        else if (rightchildindex < this->length && pq[(int)i] > pq[rightchildindex])
        {
            temp = pq[(int)i];
            pq[(int)i] = pq[rightchildindex];
            pq[rightchildindex] = temp;
            heapify();
        }
        else if (leftchildindex < this->length && pq[(int)i] > pq[leftchildindex])
        {
            temp = pq[(int)i];
            pq[(int)i] = pq[leftchildindex];
            pq[leftchildindex] = temp;
            heapify();
        }
    }
}



void PriorityQueue::insert(char val) {
    char *pnewa;
    if (this->nextIndex == this->length) {
        this->length = this->length + 1;
        pnewa = new char[this->length];
        for (int i = 0; i < this->nextIndex; i++)
            pnewa[i] = pq[i];
        for (int j = this->nextIndex; j < this->length; j++)
            pnewa[j] = 0;
        delete[] pq;
        pq = pnewa;
    }
    pq[this->nextIndex++] = val;
    PriorityQueue::heapify();
}


bool PriorityQueue::check() {
    char root;
    root = pq[0];
    for (int i = this->length - 1; i >= 0; i--)
    {
        if ((int)pq[i]< (int)root)
            return false;
    }
    return true;
}



char PriorityQueue::remove() {
    char root = pq[0];
    char *qminus;
    qminus = new char[this->length];
    for (int i = 1; i<this->length; i++)
        qminus[i - 1] = pq[i];
    pq = qminus;
    this->length -= 1;
    PriorityQueue::heapify();
    return root;
}

Xunit multiple IClassFixtures

My question is: how do I set up multiple fixtures in one test class?

The constructor of the Zoo class cannot handle multiple fixtures.

For example:

public class Zoo : IClassFixture<Tiger>, IClassFixture<Wolf>, IClassFixture<Bird>
{
   private IFixture fixture;
   public Zoo(IFixture fixture) 
   { 
    this.fixture = fixture; 
   }

   [Fact]
   public void TestAnimal()
   {
    //Arrange 
    int actualBonesCount = this.fixture.BonesCount;
    int expectedBonesCount = 2;

    //Act & Assert
    Assert.Equal(expectedBonesCount, actualBonesCount );
   }
} 

A tiger class

public class Tiger : FixtureBase
{
   public Tiger()
   {
    this.BonesCount  = 4;
   }
}

A bird class

public class Bird: FixtureBase
{
   public Bird()
   {
    this.BonesCount  = 2;
   }
}

Test fixture base class

public class FixtureBase : IFixture
{
   public int BonesCount { get; set; }
}

And the interface:

public interface IFixture
{
   int BonesCount { get; set; }
}

Xcode test coverage not covering function with block

I have a LoginViewController with a method validateLoginWithUsername. This in turn calls another method that takes two blocks (success and failure) as parameters.

I have two tests that mock and invoke both the success and failure blocks, and test coverage shows 100% coverage for the validateLoginWithUsername method. However, there are functions with different block numbers at 0% coverage, as can be seen in the screenshot below:

(screenshot not included)

I am wondering what these different block numbers mean and how I can cover them in my tests.

ChromeDriver cannot navigate to URL in eclipse

So I am very new to Cucumber, but I'm trying to use ChromeDriver to go to a certain URL and I receive an error. Here is the first line:

java.lang.IllegalStateException: The path to the driver executable must be set by the webdriver.chrome.driver system property; for more information, see http://ift.tt/GBbVJI. The latest version can be downloaded from http://ift.tt/1hV5c2G at com.google.common.base.Preconditions.checkState(Preconditions.java:177)

My code:

package cucumber.features;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class AddToList {

    WebDriver driver = null;

    @Given("^I am on Todo site$")
    public void onSite() throws Throwable {
        driver = new ChromeDriver();
        driver.navigate().to("http://localhost");
        System.out.println("on todo site");

    }

    @When("^Enter a task in todo textbox$")
    public void enterTask() throws Throwable {
        driver = new ChromeDriver();
        driver.findElement(By.name("task")).sendKeys("Test Unit Using Cucumber");
        ;
        System.out.println("task entered");
    }

    @Then("^I click on add to todo$")
    public void clickAddToTodo() throws Throwable {
        driver = new ChromeDriver();
        driver.findElement(By.xpath("//input[@value='Add to Todo' and @type='button']"));
        System.out.println("add button clicked");

    }

}

Correct way of unit-testing classes that use DateTimeOffset objects?

I would appreciate information or examples about how to correctly test code that uses DateTimeOffset instances. I know the tests have to be deterministic.

So, how would one isolate the application from the DateTimeOffset class? I would, of course, like to be able to use a fake DateTimeOffset.Now, etc.

In my tests, should I be using something like: var myDate = new DateTimeOffset(2016, 3, 29, 12, 20, 35, 93, TimeSpan.FromHours(-3));

Or would I instead use a wrapper class like MyCustomDateTimeOffset? Should I avoid DateTimeOffset entirely in my code and use a wrapper instead?
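The usual pattern behind both options is the same: inject a clock instead of reading "now" statically, so tests can freeze time. A sketch of that idea, illustrated here in Java only because java.time.Clock makes it runnable and self-contained; in C# the equivalent would be a tiny hand-rolled IClock interface whose production implementation returns DateTimeOffset.Now and whose test implementation returns a fixed value (the class and method names below are invented):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

// Production code depends on an injected Clock, never on "now" directly.
class ExpiryChecker {
    private final Clock clock;

    ExpiryChecker(Clock clock) { this.clock = clock; }

    boolean isExpired(OffsetDateTime deadline) {
        return OffsetDateTime.now(clock).isAfter(deadline);
    }
}

class ClockSketch {
    public static void main(String[] args) {
        // Deterministic test: freeze "now" at 2016-03-29T12:20:35-03:00.
        Clock fixed = Clock.fixed(Instant.parse("2016-03-29T15:20:35Z"),
                                  ZoneOffset.ofHours(-3));
        ExpiryChecker checker = new ExpiryChecker(fixed);

        OffsetDateTime past = OffsetDateTime.parse("2016-03-29T12:00:00-03:00");
        OffsetDateTime future = OffsetDateTime.parse("2016-03-29T13:00:00-03:00");
        if (!checker.isExpired(past)) throw new AssertionError("past should be expired");
        if (checker.isExpired(future)) throw new AssertionError("future should not be");
        System.out.println("ok");
    }
}
```

Values like `new DateTimeOffset(2016, 3, 29, ...)` in test data are fine as-is; it is only the ambient "current time" that needs the seam.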

How to Test a Swift Script that's not in an Xcode Project

I started writing a basic swift script file and compiling it as:

swiftc *.swift -o exporter

The script doesn't do anything right now, but will have export, import_files, etc. functions. I'm trying to find out how I can test this basic script file.

Ideally, I'd have an exporter_test.swift file which holds all my unit tests. Also, it's important to note that I'm fairly new to Swift, so I'm not sure whether this is doable outside of Xcode.

exporter.swift

func export() {
    // will do exporting of data that's been imported in import_files
}

func import_files(files: [String]) {
}

Jersey test - Http Delete method with JSON request

I am using Jersey Test to test Rest service DELETE method:

@DELETE
@Path("/myPath")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public MyResponse myMethod(MyRequest myRequest) {

I have tried the example below, among other approaches:

Entity<MyRequest> requestEntity = Entity.entity(new MyRequest(arg1, arg2), MediaType.APPLICATION_JSON);
target(MY_URI).request(MediaType.APPLICATION_JSON).method("DELETE", requestEntity)

But it does not work.

How to make Http Delete in Jersey test?

How to generate androidtest apk to use in spoon

I am using the Spoon library for testing an Android app. I need two APK files:

  • Main application
  • TestApk

I want to ask how to generate that test .apk. I have written the test class in my main application's androidTest section; I haven't created a separate project for the tests. How am I supposed to get my tests working? Could anyone provide a good link showing how to use Spoon?

Waits randomly not working in selenium

We have some UI tests written in Selenium, running on BrowserStack via TeamCity. These tests randomly fail because (in my opinion) the wait.Until calls are not working correctly, as the error is always 'could not click element ... because element would receive the click'. As you can see in the code, I apply multiple waits and still it randomly ignores them.

driver.FindElement(By.XPath("//input[@value='Login']")).Click();

//wait for the page to be loaded, check the title and open a booking
wait.Until(ExpectedConditions.ElementExists(By.LinkText("To be completed")));
wait.Until(ExpectedConditions.ElementToBeClickable(By.LinkText("To be completed")));
Assert.AreEqual("Booking Overview", driver.Title);
wait = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
wait.Until(ExpectedConditions.ElementToBeClickable(By.XPath("//button")));
wait.Until(ExpectedConditions.ElementToBeClickable(By.LinkText("To be completed")));
driver.FindElement(By.LinkText("To be completed")).Click();

//wait for step 2 to load
wait.Until(ExpectedConditions.ElementExists(By.XPath("//div[@id='WiredWizardsteps']/div[2]/div/form/div/div[2]/label/span")));
wait.Until(ExpectedConditions.ElementToBeClickable(By.XPath("//div[@id='WiredWizardsteps']/div[2]/div/form/div/div[2]/label/span")));
//verify we are at step 2
var step2 = driver.FindElement(By.XPath("//ul[contains(@class, 'steps')]/li[contains(@class, 'active')]"));
Assert.AreEqual("2", step2.GetAttribute("data-target"));
//click the radiobutton with a movetoelement
var option = driver.FindElement(By.XPath("//div[@id='WiredWizardsteps']/div[2]/div/form/div/div[2]/label/span"));
new Actions(driver).MoveToElement(option).Click().Perform();
//retry programmatically
driver.FindElement(By.XPath("//div[@id='WiredWizardsteps']/div[2]/div/form/div/div[2]/label/span")).Click();
//wait for the textbox to appear
wait.Until(ExpectedConditions.ElementToBeClickable(By.Name("commodityNonOperative")));

If anybody has a suggestion or has had the same problem, please let me know.

Gatling: pass a parameter through scenarios

I have a test which includes 3 scenarios. The first one creates an entity type; the response gives me the id of this entity type (the one I want to save). The second scenario creates lots of entities of this type. The third scenario deletes the entity type created in the first one (so I need the id).

I am quite new to Gatling, but I understood that I can't use the session because of its scope, so I wanted to store the id in a global variable.

This is my code, but the formId variable is not properly set:

 .check(status.is(200), jsonPath("$..formId").saveAs("formId"))
    //.check(status.is(200),jsonPath("//formId").saveAs("formId"))
  ).exec { session =>
     EntityResourceFixtures.formId = "${formId}"
     session
   }

Any idea ?

How to keep your tests small while using data providers?

I am testing the endpoints/API of a web application. I have multiple small tests that depend on the return values of the preceding tests. Some of the tests even depend on side effects that are made by the preceding tests. Here's an example of how it goes (the numbered list items represent individual test cases):

  1. Make a request to an endpoint and assert the http code is 200, return the response
  2. Parse the response body and do some assertions on it, return the parsed response body
  3. Do some assertions on the debug values of the parsed response body
  4. Make a new request to another endpoint and assert the http code is 200, return the response
  5. Parse the response body and assert that the side effects from test 1 actually took place

As you can see the tests sort of propagate from test 1, and all the other tests depend on its return value or side effects.

Now I want to execute these tests with data from a data provider to test the behavior with multiple users from our application. According to the phpunit documentation, this is not possible. From the docs:

When a test depends on a test that uses data providers, the depending test will be executed when the test it depends upon is successful for at least one data set. The result of a test that uses data providers cannot be injected into a depending test.

Just to be clear, what I want is for test 1 to execute x number of times with y values, and have all the other tests propagate its return value or check its side effects each time. After some googling, the only solution that comes to mind is to put all the tests into one single test to remove all dependencies. However I have multiple test suites with this behavior, and some of the tests would get really big and unwieldy.

So, how can I keep the dependencies between small test cases while using data providers? I'm using PHP 5.5 along with Silex 1.3 and PHPUnit 4.8.

Here's an example in case I was unclear:

public function testValidResponse()
{
    $client = $this->createClient();
    $client->request('POST', '/foo', $this->params);
    $this->assertEquals(200, $client->getResponse()->getStatusCode());
    return $client->getResponse();
}

/**
 * @depends testValidResponse
 */
public function testStatusIsOk(Response $response)
{
    $json = json_decode($response->getContent(), true);
    $this->assertTrue($json['status']);
    return $json;
}

/**
 * @depends testStatusIsOk
 */
public function testExecutionTime($json)
{
    $this->assertLessThan($this->maxExecutionTime, $json['debug']['executionTimeSec']);
}

/**
 * @depends testValidResponse
 */
public function testAnotherEndpointValidResponse()
{
    $client = $this->createClient();
    $client->request('GET', '/bar');
    $this->assertEquals(200, $client->getResponse()->getStatusCode());
    return $client->getResponse();
}

/**
 * @depends testAnotherEndpointValidResponse
 */
public function testSideEffectsFromFirstTest(Response $response)
{
    // ...
}

Using input box with element by.id Protractor Testing error

I'm trying to use ids with the input boxes within my login page, but I get the following error from Protractor:

Failed: No element found using locator: By css selector, *[id="signin--username"])

Here is my log-in.js

var logIn = function() {
    this.navigate = function() {
        browser.get(browser.params.server);
    };
    this.usernameInputBox = element(by.id('signin--username'));
    this.passwordInputBox = element(by.id('signin--password'));
    this.dontHaveAnAccountButton = element(by.id('signin--no-account-question'));

    this.logInButton = element(by.id('signin--log-in'));
    this.Modal = element(by.css('.modal-dialog'));
    this.ModalButton = element(by.xpath('//*[@id="app"]/div[3]/div/div/form/div[3]/button'));
};

module.exports = new logIn();

Snippet from log-in.html

<div class="form-group">
  <div class="input-group input-group-lg">
    <span class="input-group-addon">
       <span class="glyphicon glyphicon-user"></span>
        </span>
          <input type="text"
           id="signin--username" 
           class="form-control"
           placeholder="{{'username' | translate}}" 
           ng-model="username"
           name="username" 
           required
           autofocus data-autofill
           >
   </div>
</div>

Any help much appreciated! Thanks.

CodeCeption - passing options to a helper

Is it possible to pass a config parameter to a helper class which extends the \Codeception\Module class? In my case, I want the enabled-modules section of my api.suite.yml config file to list, for example, `Helper\Api` and set its own config property.

My idea is to have different environments with different config properties. Is that possible?
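For reference, a suite config can carry per-module settings, and environments can override them. A sketch of the shape (the `endpoint` key and URLs are made-up examples):

```yaml
# api.suite.yml (hypothetical values)
modules:
    enabled:
        - Helper\Api
    config:
        Helper\Api:
            endpoint: 'http://localhost/api'
env:
    staging:
        modules:
            config:
                Helper\Api:
                    endpoint: 'http://staging.example.com/api'
```

Inside the helper, the value is then available via the module's `$this->config['endpoint']` property.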

mardi 29 mars 2016

How to deal with chai spies `with` in recursion calls?

So, if I want to check called.with(), but the function under test calls itself several times after the first call, how can I check that it calls itself correctly?

Example:

function checkMe (name, entity) {
  if (Array.isArray(entity)) {
    return entity.map(iteratee => checkMe(name, iteratee));
  }

  return output(name, entity);
}

function output (name, data) {
    return { name: name, data: data };
}

checkMe('match', [
    {
        prop: true,
        non: false
    },

    {
        prop: true,
        non: false
    },

    {
        prop: true,
        non: false
    }
]);

I would like to test it like this:

expect(checkMeSpy).to.have.been.called.with(
    ['match', { prop: true, non: false }],
    ['match', { prop: true, non: false }],
    ['match', { prop: true, non: false }]
);   
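One wrinkle worth noting, independent of chai-spies: a spy only records calls that go through the spied reference. Because checkMe calls itself directly, wrapping the exported function won't see the recursive calls; routing the self-call through an object (or through whatever binding the recursion uses) will. A minimal hand-rolled sketch of that idea:

```javascript
// Hand-rolled "spy": the recursion goes through api.checkMe, so the
// wrapper records every call, including the self-calls.
const calls = [];

const api = {
  checkMe(name, entity) {
    calls.push([name, entity]);
    if (Array.isArray(entity)) {
      return entity.map(iteratee => api.checkMe(name, iteratee));
    }
    return { name: name, data: entity };
  }
};

api.checkMe('match', [{ prop: true, non: false }, { prop: true, non: false }]);

console.log(calls.length); // 3: the outer call plus one per array element
console.log(calls[1][0]);  // 'match'
```

With chai-spies the same principle applies: spy on the method of the object the recursion dispatches through, not on a detached copy of the function.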

Accessing Variable of an Instanced Class

Firstly, I apologise for the bad title naming sense. I am unsure as to how to phrase it correctly.

My problem is that I have four given classes, which I call A, B, C and D for simplicity.

  • A contains a method which calls B to do something.
  • B is a singleton, and provides A with an instance of it.
  • B has a variable, C* c.
  • C is basically a table containing many D.

I want to test that D has the correct information stored in it, passed through to the program via A.

In code form, it would look something like this (Note that they are originally separated into .h and .cpp, but I combined them and extracted relevant information, so it may seem weird):

Class A:

#pragma once

#include "B.h"

public:
    bool someFunction(std::string sampleString) {
        B::getInstance()->doSomething(sampleString);
    }

Class B:

#pragma once

#include "C.h"

private:
    static B* instance;
    C* c;

public:
    static B* getInstance();

    bool doSomething(std::string sampleString) {
        // Here, I assume that C only contains one D, so dNum is actually 1
        // In reality, the sampleString contains dNum to be extracted
        int dNum = 1;
        D* neededD = c->getD(dNum);

        // saves data from sampleString in neededD
    }

Class C:

#pragma once

#include "D.h"

private:
    std::vector<D*> d;

public:
    D* getD(int dNum) {
        // d is a std::vector, not a pointer, so use . rather than ->
        D* requiredD = d.at(dNum);
        return requiredD;
    }

Class D:

#pragma once

private:
    // A bunch of vectors to store data about a particular D

public:
    // A bunch of get/set methods for the vectors
    // one of them is getString()

I am supposed to call the method in A, and check that D contains the correct data which I have passed into the method in A. However, I am completely unsure as to how to do this.

I have tried the following in my test file:

#include "A.h"

public:
    A* a;
    B* b;
    C* c;

    TEST_METHOD(Test1) {
        std::string testString = "abc";
        a->someFunction(testString);

        // the following doesn't work
        std::string checkString = (c->getD(1))->getString();
        Assert::IsTrue(testString == checkString);
    }

I don't even understand what I am typing in the test method I gave above with regards to accessing C to get the D I want, but I hope it provides some explanation on what I'm trying to achieve here. Basically, to test that the D has the correct information I passed into A.

I have taken a look at stubs and mocks, but I do not understand how to use them in this situation (I have actually never utilised them).

Once again, sorry if my explanation isn't clear. I am very weak at C++, and thus do not really know how to explain things. As such, even searching for similar questions proved to be impossible, so I apologise if this question has been asked before.

Thank you very much for any assistance you may provide!

implementing another class to create the first 100 prime numbers

OK, I'm a little mind-blown by an assignment I have to do. We have to implement the Sequence interface from http://ift.tt/25vn9fB (the chapter 10 example) in a new class called PrimeSequence, and it has to right-align the first 100 prime numbers. I don't understand the point of implementing the interface; I did it, but I know I'm not following the assignment rules, because I don't understand what I'm supposed to implement from it, and I also use nothing from it. I'm not sure what I have to do.

Sequence Class

public interface Sequence 
{   
    int next();
}

PrimeSequence Class

public class PrimeSequence implements Sequence
{

public PrimeSequence()
{

}

public boolean isPrime(int x)
{
    for (int start = 2; start <= Math.sqrt(x); start++)
    {
        if (x % start == 0) 
        {
            return false;
        }
    }
    return true;
}

    private int current = 1;

    public int next()
    {
        // advance to the next prime after the current one
        do
        {
            current++;
        } while (!isPrime(current));
        return current;
    }
}

PrimeSequenceTester

public class PrimeSequenceTester {


public static void main(String[] args) 
{
    PrimeSequence prime = new PrimeSequence();

    int currentNumber = 2;
    int primesFound = 0;

    while (primesFound < 100) {
        if (prime.isPrime(currentNumber)) 
        {
            primesFound++;

            System.out.printf("%4s",currentNumber + " ");
            if (primesFound % 10 == 0) 
            {
                System.out.println();
            }
        }

        currentNumber++;
        }
    }
}

Is there a better way to distribute Android alpha versions to testers?

I'm currently using the Google Play alpha channel to distribute alpha versions of my app to testers. It works, but the latency is unpredictable and long: sometimes it's two hours; sometimes the app won't download until they uninstall it and try reinstalling; sometimes it becomes available on one device but not another.

I completely understand the need for Google to check for viruses and schedule downloads for production releases. But for a handful of testers? Testers need it now, not tomorrow.

Is there a work around for Google's latency? Is there a better approach?

How to manipulate java.util.Date date() from outside the java binary?

I have a Java app using new Date() to obtain the current date, and I'm trying to manipulate the date it obtains for testing purposes.

I've tried setting the system date (on Windows) to a date in the past, for example 1/1/2014, but the Java app seems to keep getting the real-time date from somewhere other than the system date.

Is it possible to manipulate the date and if so, how can I manipulate the date that the date() function returns from outside the binary?
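For context: `new Date()` reads System.currentTimeMillis(), so it does follow the OS clock; if it appears not to, something else (an NTP sync, a VM host, etc.) may be resetting the time. For testing it is usually easier to stop calling `new Date()` directly and route time through java.time.Clock (Java 8+), which a test can fix at any instant without touching the OS. A minimal sketch (class and method names are illustrative):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.Date;

public class FakableTime {
    // Production leaves this as the system clock; a test swaps it out.
    static Clock clock = Clock.systemUTC();

    static Date currentDate() {
        return Date.from(Instant.now(clock));
    }

    public static void main(String[] args) {
        // Pin the clock to 1 Jan 2014, regardless of the real system time.
        clock = Clock.fixed(Instant.parse("2014-01-01T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(currentDate());
    }
}
```

If the binary truly cannot be changed, the remaining options are external: libraries that instrument the JVM to fake time, or running the app in a container/VM whose clock you control.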

clickOnMenuItem clicks on wrong button robotium

I started using Robotium and I'm trying out some test cases. I have a menu and I want to click on the items in this menu. Well, it worked fine for all of the items but one. In that one case, it actually clicks on something but opens the wrong activity.

solo.clickOnMenuItem("Produção");
if(solo.waitForActivity(ProducaoActivity.class))
    telaProducao();

Well, it should open ProducaoActivity, but in fact it opens a different activity. For all of the other items it opens the right activity.

c# CREAM mutation testing: Is there any way to stop generating new project folders every time?

I am currently using the CREAM mutation testing tool for a C# project. Every time I run a test, it generates a new project folder from the original one, and that fills up my hard drive extremely fast. For example, it will create folders Project_AOR_1 through Project_AOR_100. Is there any way to not generate a new project folder every time? Also, is there a way to save the 'show' data which records the lines and code mutated? Thank you.

Ruby minitest and exit status

I'm working on a Ruby Minitest script on Linux. Every time it finds any failure or error (in the tests, not in the running script), the Ruby script itself returns 1 as its exit status in bash.

output follows:

$ ruby test.rb; echo $?

Run options: --seed 30930

Running: ...

...F....

Finished in... 8 runs, 9 assertions, 1 failures, 0 errors, 0 skips

generating ci files

1

The problem is, as I'm triggering the tests from Fabric, it aborts every time a test finds an error or failure. Is this expected behaviour for the test, or is something wrong with the exit status? (If the script ends with no errors or failures, it exits 0.)

I would like to run fabric without env.warn_only = True to mark tests as failed if the test script may be broken.

Is there a way for the test not to change the exit status when it fails (but succeeds in running)? Or is this the expected behaviour, and I have to adjust my Fabric script?

Thanks!
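The nonzero exit status is deliberate: minitest/autorun installs an at_exit hook that exits 1 when any test fails, which is exactly what CI tools key on. If the script needs to finish its own work (the CI file generation) and pick its exit code itself, one option is to drive the run manually instead of relying on autorun, since Minitest.run returns true/false rather than exiting. A sketch (the CI-file step is a placeholder):

```ruby
require "minitest"

# Load/define the test files here instead of using require "minitest/autorun".
ok = Minitest.run([])          # true if everything passed; does not exit

# ... generate CI files regardless of the result ...
puts "suite passed: #{ok}"

# Then decide the status Fabric sees, e.g.:
# exit(ok ? 0 : 2)
```

The alternative is to leave the test script as-is (a failing suite *should* exit nonzero) and handle the status on the Fabric side, e.g. with warn_only or by catching the failure explicitly.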

Play Framework and Slick: testing database-related services

I'm trying to follow the most idiomatic way to have a few fully tested DAO services.

I've got a few case classes such as the following:

case class Person (
  id        :Int,
  firstName :String,
  lastName  :String
)

case class Car (
  id      :Int,
  brand   :String,
  model   :String
)

Then I've got a simple DAO class like this:

class ADao @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._

  private val persons = TableQuery[PersonTable]
  private val cars = TableQuery[CarTable]
  private val personCar = TableQuery[PersonCar]

  class PersonTable(tag: Tag) extends Table[Person](tag, "person") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def firstName = column[String]("name")
    def lastName = column[String]("description")
    def * = (id, firstName, lastName) <> (Person.tupled, Person.unapply)
  }

  class CarTable(tag: Tag) extends Table[Car](tag, "car") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def brand = column[String]("brand")
    def model = column[String]("model")
    def * = (id, brand, model) <> (Car.tupled, Car.unapply)
  }

  // relationship
  class PersonCar(tag: Tag) extends Table[(Int, Int)](tag, "person_car") {
    def carId = column[Int]("c_id")
    def personId = column[Int]("p_id")
    def * = (carId, personId)
  }

  // simple function that I want to test
  def getAll(): Future[Seq[((Person, (Int, Int)), Car)]] = db.run(
    persons
      .join(personCar).on(_.id === _.personId)
      .join(cars).on(_._2.carId === _.id)
      .result
  )  
}

And my application.conf looks like:

slick.dbs.default.driver="slick.driver.PostgresDriver$"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://super-secrete-prod-host/my-awesome-db"
slick.dbs.default.db.user="myself"
slick.dbs.default.db.password="yolo"

Now, going through Testing with databases and trying to mimic the play-slick sample project, I'm running into a lot of trouble, and I cannot seem to understand how to make my test use a different database. (I suppose I need to add a different db to my conf file, say slick.dbs.test, but then I couldn't find out how to inject that inside the test.)
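For the configuration half of the problem, one common shape is to point the same slick.dbs.default keys at an in-memory H2 database for tests, either in a separate conf file loaded by the test runner or via GuiceApplicationBuilder's configure(...) overrides. A sketch mirroring the keys from the application.conf above (file name and H2 options are assumptions):

```hocon
# conf/application.test.conf (hypothetical) -- same keys, H2 in PostgreSQL mode
slick.dbs.default.driver="slick.driver.H2Driver$"
slick.dbs.default.db.driver="org.h2.Driver"
slick.dbs.default.db.url="jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1"
slick.dbs.default.db.user="sa"
slick.dbs.default.db.password=""
```

With that in place, the test builds an Application against the overridden config and pulls the DAO out of its injector, which is what the sample project's `app2dao(app)` helper is doing under the hood.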

Also, on the sample repo, there's some "magic" like Application.instanceCache[CatDAO] or app2dao(app).

Can anybody point me at some full fledged example of or repo that deals correctly with testing play and slick?

Thanks.

How to make a Unit test with several data sources?

I have a method and I want to test it with two data sources (two lists in my case). Can someone help and explain how to do it right? Should I use the TestCaseSource attribute, and if so, how?

public void TestMethodIntToBin(int intToConvert, string result)
{
    Binary converter = new Binary();
    string expectedResult = converter.ConvertTo(intToConvert);
    Assert.AreEqual(expectedResult, result);
}

public List<int> ToConvert = new List<int>()
{
    12,
    13,
    4,
    64,
    35,
    76,
    31,
    84
};

public List<string> ResultList = new List<string>()
{
    "00110110",
    "00110110",
    "00121011",
    "00110110",
    "00110110",
    "00100110",
    "00110110",
    "00110110"
};
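Yes, NUnit's [TestCaseSource] is made for exactly this, and it works best when each input is paired with its expected output in a single case object, rather than kept in two parallel lists that can drift out of sync. The pairing idea, sketched here as a plain runnable table-driven loop (written in Java for a self-contained demo; the converter and the expected 8-bit strings are illustrative, not the question's values):

```java
public class BinaryTableSketch {
    // Stand-in for the converter under test.
    static String convertTo(int n) {
        return String.format("%8s", Integer.toBinaryString(n)).replace(' ', '0');
    }

    public static void main(String[] args) {
        // One row per case: { input, expected } -- the shape a TestCaseSource yields.
        Object[][] cases = {
            { 12, "00001100" },
            { 13, "00001101" },
            {  4, "00000100" },
            { 64, "01000000" },
        };
        for (Object[] c : cases) {
            String actual = convertTo((Integer) c[0]);
            if (!actual.equals(c[1])) {
                throw new AssertionError(c[0] + " -> " + actual + ", expected " + c[1]);
            }
        }
        System.out.println("all cases passed");
    }
}
```

In NUnit itself the equivalent is a static IEnumerable of TestCaseData(input, expected) referenced with [TestCaseSource], so the runner reports each pair as its own test case.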

How to make Python absolute import lines shorter?

this is my project structure (just an example to illustrate the problem):

.
├── hello_world
│   ├── __init__.py
│   └── some
│       └── very_nested
│           └── stuff.py
└── tests
    └── test_stuff.py

The test_stuff.py file (for py.test):

from hello_world.some.very_nested.stuff import Magic
from hello_world.some.very_nested.other_stuff import MoreMagic

def test_magic_fact_works():
    assert Magic().fact(3) == 6

# ...

Is there any way how to make the import lines shorter? They get too long in the real project.

For example, this would be nice, but it doesn't work :)

import hello_world.some.very_nested as vn
from vn.stuff import Magic
from vn.other_stuff import MoreMagic

I cannot use relative imports (I assume) because the tests are not inside the package. I could move them, but is it possible without changing the project structure?
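An alias on the full dotted path does work at the module level; what fails is only reusing the alias in a later `from` statement, because `from vn.stuff import ...` treats `vn` as a literal package name. Two patterns that keep the lines short, demonstrated with a similarly nested stdlib package since hello_world isn't importable here:

```python
# Pattern 1: alias the deepest module, then pull names off the alias.
#   cf. import hello_world.some.very_nested.stuff as stuff; Magic = stuff.Magic
import xml.etree.ElementTree as ET
Element = ET.Element

# Pattern 2: resolve the long prefix once with importlib.
#   cf. importlib.import_module("hello_world.some.very_nested")
import importlib
vn = importlib.import_module("xml.etree")
stuff = importlib.import_module(".ElementTree", package="xml.etree")

print(stuff is ET)  # True: both names refer to the same cached module object
```

Pattern 1 is usually enough for tests: one long line per module, and every name after that stays short.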

Comparing garbage generation in Java

I'm looking for tools that will let me measure and compare the amount of garbage generated by certain code on the JVM.

I'm aware that tools like YourKit will allow tracking of allocations, but using that tool is very clicky-clicky. (Change code, run with agent, enable tracking, run actual code, take snapshot, etc) It requires a lot of my time/work per iteration.

I'm ideally looking for something like a microbenchmark suite, where it's easy to tweak something and take a new measurement. But instead of measuring speed, I'm interested in measuring allocations.

Test driver package practicalmeteor:mocha missing `runTests` export

I've just updated to meteor 1.3 and have been trying to use mocha for testing. I haven't used it before, so I'm not sure if I'm implementing it wrong, but I get the error

Test driver package practicalmeteor:mocha missing `runTests` export

in the chrome debug window when I run my app with

meteor test --driver-package practicalmeteor:mocha

I don't think the issue is with my tests, since the crash is happening as the app is starting. I do get the confirmation in the cmd that my app is running

=> App running at: http://localhost:3000/

Can we use SonarQube to completely replace custom-built security testing scenarios?

I see that SonarQube can be used for measuring code quality and for finding security vulnerabilities. I am having a hard time deciding whether to replace the custom-built BDD security testing scenarios with SonarQube analysis for my backend services. BDD testing usually takes longer than the SonarQube analysis. I would appreciate your suggestions on this.

Thanks

Run gradle task within the test environment

I am currently trying to run some protractor tests within the gradle test environment. What I would like to do is:

  • Start the test environment.
  • Start selenium.
  • Start protractor.
  • End the test environment.

Has anyone got any experience doing something similar? I have tried a couple of ways, but when I create a task of type Test and run it, the test task stops as soon as the compile task completes, because I am not specifying a Java/Groovy source set for the test (since I am not running Java/Groovy tests).

There are a few blog posts out there explaining how to run Protractor with Gradle, but they do so without a fully running Spring Boot application.

Any help with this would be much appreciated.

I need more test cases for simple web forms based division calculator

Here is my approach:

The simple division of 2 integers test use cases

The WebForm contains:

  • TextBox1 - a
  • TextBox2 - b
  • TextBox3 - c
  • Button1 - Clear
  • Button2 - Divide

c = a / b

The test cases:

1) Check if a is valid - it can accept only integer numbers, not decimals, characters or special symbols

2) Check if b is valid - it can accept only integer numbers, not decimals, characters or special symbols

3) Check division with positive numbers

4) Check division with negative numbers

5) Check division if a = 0

6) Check division if b = 0, DivideByZeroException

7) Check division the largest possible number Int32.MaxValue by smallest possible value Int32.MinValue + 1

8) Check division if all 3 text boxes are empty

9) Check division if the first (a) text box is empty

10) Check division if the second (b) text box is empty

11) Check that the result c has a value with 2 decimals

12) Check division by very large positive number

13) Check division by very large negative number

14) Check division positive number by negative number

15) Check division negative number by positive number

16) Check if "Clear" deletes the content of all text boxes

Could you suggest, please, what else?
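One observation on the list: cases 6 and 11 interact. If c must show two decimals, the division has to be done in floating-point (or decimal) arithmetic rather than integer arithmetic, and "integer division silently truncates" is itself worth a test case. A tiny sketch of the core logic those cases exercise (written in Java for a self-contained demo; the method name is hypothetical):

```java
import java.util.Locale;

public class DivisionSketch {
    static String divide(int a, int b) {
        if (b == 0) {
            throw new ArithmeticException("b must not be zero"); // case 6
        }
        // Integer division would give 10 / 4 = 2; cast before dividing (case 11).
        return String.format(Locale.US, "%.2f", (double) a / b);
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 4));  // 2.50
        System.out.println(divide(-7, 2));  // -3.50
    }
}
```

Other cases to consider: leading/trailing whitespace in the inputs, numbers with a leading plus sign ("+5"), inputs with leading zeros ("007"), and pasting (rather than typing) invalid text into the boxes.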

How can I get test information with SonarQube API?

Now I'm trying to obtain test information with the SonarQube API. Concretely, I'd like to obtain information by using api/tests/list (http://ift.tt/1SjsEnU).

My request URL is: http://localhost:9000/api/tests/list?key=org.apache.httpcomponents:httpclient&testFileId=5a3ed43a-5ae6-4154-a5bb-64c6134c69af. However, I got the following reply:

{"paging":{"pageIndex":1,"pageSize":100,"total":0},"tests":[]}

I DID set the BROWSER permission, but I still couldn't get a correct reply. What should I do to solve this problem?

EvoSuite - Parameters For Getting Most Code Coverage

I'm generating unit tests with EvoSuite and would like to get as close to 100% code coverage from the resulting unit tests as possible. What are the best command line options/parameters to set to accomplish this?

RestAssured testing without running Tomcat

I have REST web service which needs to be tested. I am using Mockito for mocking DAO classes and RestAssured for testing REST methods through URI. Is there any way to test REST service without running it separately with Tomcat? Or how to run application on Tomcat with mocked classes before test cases?

How to customize code coverage summary in TFS 2015

We are using TFS 2015 vNext Build to manage our CI build. It is quite easy to enable test coverage in the Visual Studio Test task, but in the build summary it only gives an overall block coverage percentage. For each assembly, the summary only indicates the covered blocks and lines.

It is quite time-consuming to download and open the .coverage file to get the detailed block coverage percentage for each assembly when the assembly count is huge. Is there any way to configure the summary to show the block coverage percentage for each assembly?

Wait in mobile application testing

I use Robotium, Appium and UIAutomator for testing Android applications. Sometimes I use waits or sleeps inside the code. I wonder if there is an upper limit on how long the system will wait. For example, what is the range of parameters that can be passed to the sleep() method?

Tutorials for angularjs unit-testing

I'm trying to learn some AngularJS testing frameworks, but I'm very confused by the terms Mocha, Jasmine, Karma, etc.

1. What is the difference between these unit-test infrastructures (Karma, Jasmine, Mocha)?
2. Is there any good practical tutorial that helps to learn how to write tests?

thanks

lundi 28 mars 2016

Unable to run SikuliX IDE click(image)?

I have my VB.NET project .exe, and I start the exe using the SikuliX IDE.

SikuliX IDE 1.1.0, untitled script:

#Click application logo to start 
doubleClick("1459230114375.png")
#Login screen Enter UserName,Password,click ok
type("1459230089151.png","admin")
type("1459230150826.png","")
click("1459229716030.png")

Running this code, I get the error message:

[error] RobotDesktop: checkMousePosition: should be L(113,545)@S(0)[0,0 1280x768] but after move is L(706,63)@S(0)[0,0 1280x768] Possible cause in case you did not touch the mouse while script was running: Mouse actions are blocked generally or by the frontmost application. You might try to run the SikuliX stuff as admin.

[error] RobotDesktop: checkMousePosition: should be L(575,376)@S(0)[0,0 1280x768] but after move is L(600,353)@S(0)[0,0 1280x768] Possible cause in case you did not touch the mouse while script was running: Mouse actions are blocked generally or by the frontmost application. You might try to run the SikuliX stuff as admin.

[error] RobotDesktop: checkMousePosition: should be L(715,402)@S(0)[0,0 1280x768] but after move is L(595,350)@S(0)[0,0 1280x768] Possible cause in case you did not touch the mouse while script was running: Mouse actions are blocked generally or by the frontmost application. You might try to run the SikuliX stuff as admin.

Please help me solve this problem.

django test the correct template used: with self.assertTemplateUsed()

I used to see the following tests in Django:

with self.assertTemplateUsed('<someTemplate>'):
    response = self.client.get('<someURL>')
    self.assertEqual(response.status_code, 200)

Question:

Since we have already had the with part, is it necessary to test the status_code? In other words, is the final statement redundant?

Can an employee change his domain from Testing to Development? [on hold]

I am curious to know whether an employee can change domain from testing to development. I have a scenario: one of my friends has 2 years of experience in software development, but after that he got a job in testing. He accepted the offer (since he had no other options). My question is: after a few years (say, 2 years in testing), can he get a job as a developer? He is more interested in development than in testing, but due to some unavoidable situation he became a tester.

Automate consumer-driven contract testing?

At my work we've taken the first step in going down the road of microservices. We're in the position where we control the development of both the service and the application which uses the service.

I've read about a popular testing strategy called "Consumer-Driven Contract Testing" in which consumers communicate the expectations they have of the provider, and based on this communication, a "contract" is created, and the provider ensures that this contract is not broken after making changes.

The concept makes perfect sense, and I've read about tools such as Pact which allow this process to be automated - consumer tests mock out the real service, make a request, assume a particular response, and then run unit tests against those mocked responses; if those unit tests pass, a "Pact" file is created. The provider can then pull in the Pact file, replay the requests, and compare its response with the response the consumer was expecting. The problem is... the .NET implementation only supports JSON responses, and our service sends responses in XML (Atom items), which means that I can't check if certain fields/attributes exist; the whole response needs to match. So the provider tests will instantly break any time a property is added/removed to the response. Very brittle. And it also appears the Pact implementations for javascript and .NET have pretty much been abandoned at this point.

My question is... Given that we control development of both the consumer and the provider, would it make more sense to forgo the option of using tools like Pact in favor of manually writing unit tests in the provider test suite that verify responses are expected (eg. the provider's XML response contains the fields/properties that the consumer is expecting)? Or are there other more modern (and up-to-date) tools that will automate this whole process?

Running Spock specification test with spring cli

I'm looking at an example from Wall's Spring Boot in action book. It is a simple web application written in groovy. The project is being built, run and tested using Spring CLI without a gradle build file and using a Grabs.groovy file to provide H2 and Thymeleaf dependencies. There are two test classes. The first is a JUnit test and the second is a Spock specification. The JUnit tests file is:

import org.springframework.test.web.servlet.MockMvc
import static org.springframework.test.web.servlet.setup.MockMvcBuilders.*
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*
import static org.mockito.Mockito.*

class ReadingListControllerTest {

    @Test
    void shouldReturnReadingListFromRepository() {
        List<Book> expectedList = new ArrayList<Book>()
        expectedList.add(new Book(
                id: 1,
                reader: "Craig",
                isbn: "9781617292545",
                title: "Spring Boot in Action",
                author: "Craig Walls",
                description: "Spring Boot in Action is ..."
            ))

        def mockRepo = mock(ReadingListRepository.class)
        when(mockRepo.findByReader("Craig")).thenReturn(expectedList)

        def controller = new ReadingListController(readingListRepository: mockRepo)

        MockMvc mvc = standaloneSetup(controller)
                        .build()
        mvc.perform(get("/"))
            .andExpect(view().name("readingList"))
            .andExpect(model().attribute("books", expectedList))

    }

}

and the Spock specification is:

import org.springframework.test.web.servlet.MockMvc
import static org.springframework.test.web.servlet.setup.MockMvcBuilders.*
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*
import static org.mockito.Mockito.*

class ReadingListControllerSpec extends Specification {

  MockMvc mockMvc
  List<Book> expectedList

  def setup() {
    expectedList = new ArrayList<Book>()
    expectedList.add(new Book(
      id: 1,
      reader: "Craig",
      isbn: "9781617292545",
      title: "Spring Boot in Action",
      author: "Craig Walls",
      description: "Spring Boot in Action is ..."
    ))

    def mockRepo = mock(ReadingListRepository.class)
    when(mockRepo.findByReader("Craig")).thenReturn(expectedList)

    def controller = 
        new ReadingListController(readingListRepository: mockRepo)
    mockMvc = standaloneSetup(controller).build()
  }

  def "Should put list returned from repository into model"() {
    when:
      def response = mockMvc.perform(get("/"))

    then:
      response.andExpect(view().name("readingList"))
              .andExpect(model().attribute("books", expectedList))
  }

}

These files are in the tests directory off the root of the project. If I run the JUnit test with the command "spring test tests/ReadingListControllerTest.groovy", the test runs successfully. If I run both tests with the command "spring test tests", both tests run successfully. However, if I run just the Spock test, either with the command "spring test tests/ReadingListControllerSpec.groovy" or by removing the ReadingListControllerTest.groovy file and using the command "spring test tests", I get the following compile error:

ReadingListControllerSpec.groovy: 5: unable to resolve class org.mockito.Mockito
 @ line 5, column 1.
   import static org.mockito.Mockito.*
   ^

I'm not familiar with writing Spock tests, so I'm not sure what the problem is.

CNN won't classify my dataset

I'm looking at my own dataset of biopsy images and trying to classify different types of cancer. My CNN learns from the training data (using a small learning rate), but the best it seems able to reach is guessing the mode class for all of the data (81/112 = 72.32% of the data is 1, the rest is 0). Below is a two-class classification problem where the model trains on 500 images and tests on 112. The model trains up to guessing the mode class. These images have been classified by doctors and are official medical records, so a CNN should be able to learn the required patterns.

Epoch 1/1000
550/550 [==============================] - 99s - loss: 0.7391 - acc: 0.4873 - val_loss: 0.7459 - val_acc: 0.3750
Epoch 2/1000
550/550 [==============================] - 99s - loss: 0.7077 - acc: 0.5255 - val_loss: 0.7363 - val_acc: 0.3839
Epoch 3/1000
550/550 [==============================] - 99s - loss: 0.7243 - acc: 0.5255 - val_loss: 0.7272 - val_acc: 0.4196
Epoch 4/1000
550/550 [==============================] - 99s - loss: 0.6993 - acc: 0.5564 - val_loss: 0.7185 - val_acc: 0.4107
Epoch 5/1000
550/550 [==============================] - 99s - loss: 0.6941 - acc: 0.5655 - val_loss: 0.7105 - val_acc: 0.4375
Epoch 6/1000
550/550 [==============================] - 99s - loss: 0.6774 - acc: 0.5709 - val_loss: 0.7028 - val_acc: 0.4732
Epoch 7/1000
550/550 [==============================] - 99s - loss: 0.6681 - acc: 0.5945 - val_loss: 0.6954 - val_acc: 0.4911
Epoch 8/1000
550/550 [==============================] - 99s - loss: 0.6615 - acc: 0.6109 - val_loss: 0.6887 - val_acc: 0.5268
Epoch 9/1000
550/550 [==============================] - 99s - loss: 0.6487 - acc: 0.6309 - val_loss: 0.6823 - val_acc: 0.5625
Epoch 10/1000
550/550 [==============================] - 99s - loss: 0.6419 - acc: 0.6291 - val_loss: 0.6762 - val_acc: 0.5982
Epoch 11/1000
550/550 [==============================] - 99s - loss: 0.6335 - acc: 0.6491 - val_loss: 0.6705 - val_acc: 0.6250
Epoch 12/1000
550/550 [==============================] - 99s - loss: 0.6210 - acc: 0.6745 - val_loss: 0.6649 - val_acc: 0.6339
Epoch 13/1000
550/550 [==============================] - 99s - loss: 0.6270 - acc: 0.6636 - val_loss: 0.6597 - val_acc: 0.6339
Epoch 14/1000
550/550 [==============================] - 99s - loss: 0.6291 - acc: 0.6527 - val_loss: 0.6549 - val_acc: 0.6607
Epoch 15/1000
550/550 [==============================] - 99s - loss: 0.6195 - acc: 0.6727 - val_loss: 0.6504 - val_acc: 0.6696
Epoch 16/1000
550/550 [==============================] - 99s - loss: 0.6016 - acc: 0.6891 - val_loss: 0.6461 - val_acc: 0.6786
Epoch 17/1000
550/550 [==============================] - 99s - loss: 0.6019 - acc: 0.6964 - val_loss: 0.6423 - val_acc: 0.6875
Epoch 18/1000
550/550 [==============================] - 99s - loss: 0.6086 - acc: 0.7000 - val_loss: 0.6387 - val_acc: 0.6964
Epoch 19/1000
550/550 [==============================] - 99s - loss: 0.5898 - acc: 0.7000 - val_loss: 0.6351 - val_acc: 0.7054
Epoch 20/1000
550/550 [==============================] - 99s - loss: 0.5988 - acc: 0.7018 - val_loss: 0.6319 - val_acc: 0.7054
Epoch 21/1000
550/550 [==============================] - 99s - loss: 0.5904 - acc: 0.7000 - val_loss: 0.6288 - val_acc: 0.7054
Epoch 22/1000
550/550 [==============================] - 99s - loss: 0.5807 - acc: 0.7364 - val_loss: 0.6260 - val_acc: 0.7143
Epoch 23/1000
550/550 [==============================] - 99s - loss: 0.5828 - acc: 0.7327 - val_loss: 0.6235 - val_acc: 0.7143
Epoch 24/1000
550/550 [==============================] - 99s - loss: 0.5756 - acc: 0.7309 - val_loss: 0.6211 - val_acc: 0.7143
Epoch 25/1000
550/550 [==============================] - 99s - loss: 0.5567 - acc: 0.7636 - val_loss: 0.6187 - val_acc: 0.7232
Epoch 26/1000
550/550 [==============================] - 99s - loss: 0.5863 - acc: 0.7455 - val_loss: 0.6167 - val_acc: 0.7232
Epoch 27/1000
550/550 [==============================] - 99s - loss: 0.5789 - acc: 0.7491 - val_loss: 0.6147 - val_acc: 0.7232
Epoch 28/1000
550/550 [==============================] - 99s - loss: 0.5738 - acc: 0.7527 - val_loss: 0.6129 - val_acc: 0.7232
Epoch 29/1000
550/550 [==============================] - 99s - loss: 0.5547 - acc: 0.7655 - val_loss: 0.6112 - val_acc: 0.7232
Epoch 30/1000
550/550 [==============================] - 99s - loss: 0.5773 - acc: 0.7582 - val_loss: 0.6098 - val_acc: 0.7232
Epoch 31/1000
550/550 [==============================] - 99s - loss: 0.5740 - acc: 0.7527 - val_loss: 0.6084 - val_acc: 0.7232
Epoch 32/1000
550/550 [==============================] - 99s - loss: 0.5576 - acc: 0.7655 - val_loss: 0.6070 - val_acc: 0.7232
Epoch 33/1000
550/550 [==============================] - 99s - loss: 0.5727 - acc: 0.7564 - val_loss: 0.6058 - val_acc: 0.7232
Epoch 34/1000
550/550 [==============================] - 99s - loss: 0.5527 - acc: 0.7582 - val_loss: 0.6047 - val_acc: 0.7232
Epoch 35/1000
550/550 [==============================] - 99s - loss: 0.5431 - acc: 0.7709 - val_loss: 0.6037 - val_acc: 0.7232
Epoch 36/1000
550/550 [==============================] - 99s - loss: 0.5584 - acc: 0.7600 - val_loss: 0.6028 - val_acc: 0.7232
Epoch 37/1000
550/550 [==============================] - 99s - loss: 0.5509 - acc: 0.7618 - val_loss: 0.6019 - val_acc: 0.7232
Epoch 38/1000
550/550 [==============================] - 99s - loss: 0.5553 - acc: 0.7655 - val_loss: 0.6012 - val_acc: 0.7232
Epoch 39/1000
550/550 [==============================] - 99s - loss: 0.5572 - acc: 0.7600 - val_loss: 0.6005 - val_acc: 0.7232
Epoch 40/1000
550/550 [==============================] - 99s - loss: 0.5511 - acc: 0.7873 - val_loss: 0.5999 - val_acc: 0.7232
Epoch 41/1000
550/550 [==============================] - 99s - loss: 0.5483 - acc: 0.7727 - val_loss: 0.5993 - val_acc: 0.7232
Epoch 42/1000
550/550 [==============================] - 99s - loss: 0.5489 - acc: 0.7691 - val_loss: 0.5987 - val_acc: 0.7232
Epoch 43/1000
550/550 [==============================] - 99s - loss: 0.5552 - acc: 0.7800 - val_loss: 0.5983 - val_acc: 0.7232
Epoch 44/1000
550/550 [==============================] - 99s - loss: 0.5432 - acc: 0.7745 - val_loss: 0.5979 - val_acc: 0.7232
Epoch 45/1000
550/550 [==============================] - 99s - loss: 0.5382 - acc: 0.7764 - val_loss: 0.5975 - val_acc: 0.7232
Epoch 46/1000
550/550 [==============================] - 99s - loss: 0.5630 - acc: 0.7764 - val_loss: 0.5972 - val_acc: 0.7232
Epoch 47/1000
550/550 [==============================] - 99s - loss: 0.5434 - acc: 0.7745 - val_loss: 0.5969 - val_acc: 0.7232
Epoch 48/1000
550/550 [==============================] - 99s - loss: 0.5538 - acc: 0.7836 - val_loss: 0.5967 - val_acc: 0.7232
Epoch 49/1000
550/550 [==============================] - 99s - loss: 0.5596 - acc: 0.7745 - val_loss: 0.5965 - val_acc: 0.7232
Epoch 50/1000
550/550 [==============================] - 99s - loss: 0.5467 - acc: 0.7727 - val_loss: 0.5963 - val_acc: 0.7232

Do let me know if more information is needed.
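Note that the plateau at val_acc 0.7232 is exactly the majority-class share (81/112), which usually points at class imbalance rather than a broken network. One common first step is weighting classes inversely to their frequency; a sketch of that computation (the label list here is made up to mirror the split described above, and Keras's model.fit accepts such a dict via its class_weight argument):

```python
from collections import Counter

# Hypothetical labels mirroring the 81-vs-31 split described above; with
# real data this would be the training label list.
labels = [1] * 81 + [0] * 31

counts = Counter(labels)
total, n_classes = len(labels), len(counts)

# Inverse-frequency weighting: the rarer class gets the larger weight,
# so the loss penalizes mistakes on it more heavily.
class_weight = {c: total / (n_classes * n) for c, n in counts.items()}
print(class_weight)
```

With weights like these, always guessing the mode class stops being a loss minimum, which makes the plateau easier to diagnose.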

How can I make a list that contains all elements that have a given XPath?

I'm trying to learn how to work with a native Android application using Appium and Java.

I have a structure:

Class B contains:
class A: element_1
class A: element_2

I need to get all elements that have class B and put them into a list.

How do I build a list of child elements in Java using Appium for a native Android application?

Writing a test suite in scala to compare output of java programs

How would I go about building a test suite in Scala for the following problem?

I have a list of Java programs. I want to pass each of them into my Scala program, which transforms the program and outputs a new version of it, then run both the old and new Java programs and check whether their outputs are the same.

Would anyone be able to help?

Best way to use Jasmine to test Angular Controller calls to Services with Promise return

After a week looking for a good answer/sample, I decided to post my question.

I need to know the best way to code and test something like this:

Controller

// my.controller.js
(function () {

  'use strict';

  angular.module('myApp.myModule').controller('Awesome', Awesome);

  function Awesome($http, $state, AwesomeService) {

    var vm = this; // using 'controllerAs' style

    vm.awesomeThingToDo = awesomeThingToDo;

    function awesomeThingToDo() {
      AwesomeService.awesomeThingToDo().then(function (data) {
        vm.awesomeMessage = data.awesomeMessage;
      });
    }
  }
})();

Service

// my.service.js
(function () {
  'use strict';

  angular.module('myApp.myModule').factory('AwesomeService', AwesomeService);

  function AwesomeService($resource, $http) {

    var service = {
      awesomeThingToDo: awesomeThingToDo
    }

    return service;

    function awesomeThingToDo() {

      var promise = $http.get("/my-backend/api/awesome").then(function (response) {
          return response.data;
        });

      return promise;
    }
  }
})();

My app works OK with this structure, and my service unit tests are OK too. But I don't know how to unit test the controller.

I tried something like this:

Specs

// my.controller.spec.js
(function () {
  'use strict';

  describe("Awesome Controller Tests", function() {

    beforeEach(module('myApp.myModule'));

    var vm, awesomeServiceMock;

    beforeEach(function () {
      awesomeServiceMock = { // Is this a good (or the best) way to mock the service?
        awesomeThingToDo: function() {
          return {
            then: function() {}
          }
        }
      };
    });

    beforeEach(inject(function ($controller) {
      vm = $controller('Awesome', {AwesomeService : awesomeServiceMock});
    }));

    it("Should return an awesome message", function () {
      // I don't know another way do to it... :(
      spyOn(awesomeServiceMock, "awesomeThingToDo").and.callFake(function() {
        return {
          then: function() {
            vm.awesomeMessage = 'It is awesome!'; // <-- I think I shouldn't do this.
          }
        }
      });

      vm.awesomeThingToDo(); // Call to real controller method which should call the mock service method.

      expect(vm.awesomeMessage).toEqual('It is awesome!'); // It works. But ONLY because I wrote the vm.awesomeMessage above.

    });

  });
})();

My app uses Angular 1.2.28 and Jasmine 2.1.3 (with Grunt and Karma).
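For reference, the shape such a test can take is easier to see without Angular in the way: have the mock return a real promise and assert only after it resolves, instead of writing the controller's side effect inside the mock's then(). A minimal plain-JS sketch of that pattern (in a real Jasmine spec the mock would return $q.when(...) and the promise would be flushed with $rootScope.$apply()):

```javascript
// Mock service: resolves with canned data; the controller-side logic
// stays in the controller, not in the mock.
const awesomeServiceMock = {
  awesomeThingToDo: () => Promise.resolve({ awesomeMessage: 'It is awesome!' })
};

// Stand-in for the controller method under test.
const vm = {};
function awesomeThingToDo() {
  return awesomeServiceMock.awesomeThingToDo().then(data => {
    vm.awesomeMessage = data.awesomeMessage;
  });
}

// Assert after the promise settles, not before.
awesomeThingToDo().then(() => {
  console.log(vm.awesomeMessage);
});
```

The key design point: the mock owns only the canned data, so the assertion genuinely exercises the controller's then() callback.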

Android instrumentation test failing when opening a URL and the device has multiple apps to open it

I have an Android app with some instrumentation tests using Espresso 2.0, where I open a URL in a web browser by tapping different UI elements. The tests work fine when the device has only one browser app or when a default one is already selected. The problem comes when the device has multiple browser apps and none is the default: in that case, when the test tries to open the URL, a dialog to choose the browser app is displayed and the test gets stuck there forever. The same happens if I run these instrumentation tests on the Cloud Test Lab.

Has anyone faced this problem and found a workaround?

This is an example of one of the tests:

@Test
public void testAttributions() throws Exception {
    List<Attribution> attributions = activityRule.getActivity().mAttributionsProvider.getAttributions();

    onView(withId(R.id.list_attributions)).check(matches(EspressoUtils.withRecyclerViewChildCount(attributions.size())));

    for (int i = 0; i < attributions.size(); i++) {
        Attribution attribution = attributions.get(i);
        onView(withId(R.id.list_attributions))
                .perform(RecyclerViewActions.actionOnItemAtPosition(i, EspressoUtils.checkAttributionView(attribution.getLibrary(),
                        attribution.getAuthor(), attribution.getDescription(), attribution.getLicense())));
    }
}

It's just a RecyclerView with CardView items, and when you click on one of them it opens a URL using this code:

/**
 * Opens an url in a web browser
 *
 * @param context The application's environment
 * @param url     Url to open
 * @throws BrowserNotFoundException If a web browser is not installed on the device
 **/
public static void openUrl(Context context, String url) throws BrowserNotFoundException {
    Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
    if (!isIntentAvailable(context, intent)) {
        throw new BrowserNotFoundException();
    }

    context.startActivity(intent);
}

Thanks!

CI with mocha and express

I want to start the tests on each GET request and store their status in a local DB. The tests start only on the first request.

var express = require('express');
var path = require('path');
var Mocha = require('mocha');
var morgan = require('morgan');
var bodyParser = require('body-parser');

var app = express();


app.use(morgan('dev'));
app.use(bodyParser.json());

app.get('/api/github/serviceX', function (req, res) {

   var mocha = new Mocha({
       timeout:60000,
   });

   mocha.addFile('test/serviceX.js');
   var passed = [];
   var failed = [];

   mocha.run(function () {

    console.log(passed.length + ' Tests Passed');
    passed.forEach(function(testName){
        console.log('Passed:', testName);
    });

    console.log("\n"+failed.length + ' Tests Failed');
    failed.forEach(function(testName){
        console.log('Failed:', testName);
    });


      store("serviceX", [passed, failed]);

     res.send(200, format(passed, failed));
   }).on('fail', function(test){
      failed.push(test);
   }).on('pass', function(test){
      passed.push(test);
   });
});

The first request runs fine:

4 passing (10s) 4 Tests Passed 0 Tests Failed

The second and subsequent requests do not start the tests:

0 passing (0ms)

0 Tests Passed

0 Tests Failed

What is the difference between statements, branches, functions, lines regarding code coverage? [on hold]

I get that branches are related to conditionals (every time you have an if block you need to test the else branch to get 100% coverage), but I'm not sure how the other three differ.
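A tiny example makes the distinction concrete: the call below executes every statement and every line of clamp, yet only one of the if's two branches is taken, so statement/line coverage reaches 100% while branch coverage does not.

```python
def clamp(x):
    # Negative inputs are raised to zero.
    if x < 0:
        x = 0
    return x

# This single call runs every line of clamp (100% line and statement
# coverage) but never takes the "condition false" branch of the if.
clamp(-1)
```

Statements and lines diverge when one line holds several statements (e.g. `a = 1; b = 2`), and function coverage simply asks whether each function was called at all.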

Parallel nosetests in python: IOError: [Errno 11] Resource temporarily unavailable

I'm having a problem with parallel nose tests. When I run the tests, I get this error:

nosetests -s --processes=4
.
.
.
IOError: [Errno 11] Resource temporarily unavailable

Good dashboards to look for betas [on hold]

Could you recommend any good dashboards for finding beta testers? We're introducing a new tool and would like to attract some testers, just to test the functionality and the UI.

Getting a "no message given" error in my Rails test

There are two models, User and Resident, with a one-to-one (has_one) relationship between them. This is the test for the User model (user_test.rb).

I'm getting an error in this test:

require 'test_helper'

class UserTest < ActiveSupport::TestCase

  def setup
    @resident = residents(:res1)
    @user = @resident.build_user(roll_number: "101303110", email: "user@example.com",password: "foobar", password_confirmation: "foobar")
  end
  test "should be valid" do
    assert @user.valid?
  end

  test "roll_number should be present" do
    @user.roll_number = "     "
    assert_not @user.valid?
  end

  test "email should be present" do
    @user.email = "     "
    assert_not @user.valid?
  end
  test "email should not be too long" do
    @user.email = "a" * 244 + "@example.com"
    assert_not @user.valid?
  end

  test "email validation should accept valid addresses" do
    valid_addresses = %w[user@example.com USER@foo.COM A_US-ER@foo.bar.org
                         first.last@foo.jp alice+bob@baz.cn]
    valid_addresses.each do |valid_address|
      @user.email = valid_address
      assert @user.valid?, "#{valid_address.inspect} should be valid"
    end
  end

  test "email validation should reject invalid addresses" do
    invalid_addresses = %w[user@example,com user_at_foo.org user.name@example.
                           foo@bar_baz.com foo@bar+baz.com]
    invalid_addresses.each do |invalid_address|
      @user.email = invalid_address
      assert_not @user.valid?, "#{invalid_address.inspect} should be invalid"
    end
  end

  test "email addresses should be unique" do
    duplicate_user = @user.dup
    duplicate_user.email = @user.email.upcase
    @user.save
    assert_not duplicate_user.valid?
  end

  test "password should be present (nonblank)" do
    @user.password = @user.password_confirmation = " " * 6
    assert_not @user.valid?
  end

  test "password should have a minimum length" do
    @user.password = @user.password_confirmation = "a" * 5
    assert_not @user.valid?
  end
  test "authenticated? should return false for a user with nil digest" do
    assert_not @user.authenticated?(:remember, '')
  end

end

After executing the test, the error is as follows:

 1) Failure:
ResidentTest#test_should_be_valid [/Users/nischaynamdev/RubymineProjects/hostel_mess_pay/test/models/resident_test.rb:9]:
Failed assertion, no message given.

18 runs, 25 assertions, 1 failures, 0 errors, 0 skips

User model

The link for the user model is as follows: Please help me pass this test; I've been trying for 30 minutes!

Logging in automation testing

Is there any reason to use logging in automated tests? I'm asking because my understanding is that tests must be readable and you shouldn't bloat the code with logging. Logging is also used to understand what is going on in an app, but if a test fails I know why from the assert message, and if it passes, fine, I don't care what happened inside the test.

Thank you in advance.

Sunday 27 March 2016

Is it possible to test my application with two iPads in XCode?

Let's say I'm writing a chat application. Is it possible to connect two of my iPads to my Mac and somehow use them in testing to send each other data (instead of using the simulator)?

Java Class Path issue with Randoop

I'm using Randoop, the automatic test generator for Java.

However, when running Randoop from the command line, I can't seem to figure out how to properly specify the classpath.

I read through this question (Java Classpath error: cannot find my class) in detail, but my setup seems a bit different.

I'm running on a Windows machine.

The overall project structure looks like this:

cse331/
    bin/
        hw5/
            GraphNode.class
    src/
        hw5/
            GraphNode.java
    randoop-2.1.4.jar

(There are some other files, but they're not important here, I think.)

I tried calling:

java -ea -classpath randoop-2.1.4.jar:bin/* randoop.main.Main gentests --testclass=GraphNode --timelimit=20

But received the error:

Error: Could not find or load main class randoop.main.Main

I've tried several variations, such as loading the .java file instead of the .class file on the classpath, but no option has worked so far. If I don't specify the classpath after randoop-2.1.4.jar, I get an error message saying the class GraphNode cannot be found.

The setup is just the first step and I can't seem to get on the right track.

Stroop test in Python not working properly

This is homework: I'm trying to create a Stroop test in Python. I have written most of the code already, but I'm having trouble making the stimuli randomly switch between matching and non-matching when I hit the 'next' button.

Here's my code so far:

# Button, Label, Frame
from Tkinter import *
import random

def stimulus(same):
    colors = ['red', 'blue', 'green', 'yellow', 'orange', 'purple']

    word = random.choice(colors)
    if same == True:
        return (word, word)
    else:
        colors.remove(word)
        color = random.choice(colors)
        return (word, color)

# create label using stimulus
s = stimulus(same=True)

word, color = stimulus(True)
root = Tk()
label = Label(root, text=word, fg=color)
label.pack()

#create the window
def quit():
    root.destroy()
closebutton = Button(root, text = 'close', command=quit)
closebutton.pack(padx=50, pady=50)

def next():
    word, color = stimulus(True)
    label.config(text=word, fg=color)
    label.update()

nextbutton = Button(root, text='next', command=next)
nextbutton.pack()

root.mainloop()
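As written, next() always calls stimulus(True), so every trial matches. The random switching the post describes can be obtained by feeding stimulus a random boolean; a sketch of that logic in isolation (no Tkinter), assuming a 50/50 split between matching and non-matching trials:

```python
import random

COLORS = ['red', 'blue', 'green', 'yellow', 'orange', 'purple']

def stimulus(same):
    word = random.choice(COLORS)
    if same:
        return word, word
    # Pick a display colour that differs from the word itself.
    return word, random.choice([c for c in COLORS if c != word])

# A coin flip per trial mixes matching and non-matching stimuli.
word, color = stimulus(random.random() < 0.5)
```

In the GUI, the same coin flip would go inside the next() callback before the label is updated.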

What does assert_template mean in Michael Hartl's Ruby on Rails listing 8.7?

Hi, I'm following Michael Hartl's Ruby on Rails tutorial, and things got a bit confusing when I reached chapter 6. Now I'm confused about the test code in listing 8.7 below:

require 'test_helper'

class UsersLoginTest < ActionDispatch::IntegrationTest

  test 'login with invalid information' do
    get login_path
    assert_template 'sessions/new'
    post login_path, session: { email: "", password: "" }
    assert_template 'sessions/new'
    assert_not flash.empty?
    get root_path
    assert flash.empty?
  end
end

The code above is meant to catch an unwanted flash error message persistence issue, but I'm not sure where assert_template comes from... If I could get some explanation of all of the lines above, that would be very much appreciated!

Thank you very much and I'm looking forward to your answers.

Trouble With Tester Class

I recently created a Deck class and I am trying to create a tester method to see if my class works. Here is my Deck Class:

import java.util.List;
import java.util.ArrayList;

/**
 * The Deck class represents a shuffled deck of cards.
 * It provides several operations including
 *      initialize, shuffle, deal, and check if empty.
*/
public class Deck 
{
/**
 * cards contains all the cards in the deck.
 */
public static List<Card> cards;

/**
 * size is the number of not-yet-dealt cards.
 * Cards are dealt from the top (highest index) down.
 * The next card to be dealt is at size - 1.
 */
private int size;
public static Card cardOne;

/**
 * Creates a new <code>Deck</code> instance.<BR>
 * It pairs each element of ranks with each element of suits,
 * and produces one of the corresponding card.
 * @param ranks is an array containing all of the card ranks.
 * @param suits is an array containing all of the card suits.
 * @param values is an array containing all of the card point values.
 */
public Deck(String[] ranks, String[] suits, int[] values) 
{
    for(int i=0; i<13;i++)
    {
        suits[i] = "Heart";
        ranks[i] = cardOne.rank();
        values[i] = cardOne.pointValue();
        cards.add(cardOne);
    }
    for(int i=0; i<13;i++)
    {
        suits[i] = "Spade";
        ranks[i] = cardOne.rank();
        values[i] = cardOne.pointValue();
        cards.add(cardOne);
    }
    for(int i=0; i<13;i++)
    {
        suits[i] = "Club";
        ranks[i] = cardOne.rank();
        values[i] = cardOne.pointValue();
        cards.add(cardOne);
    }
    for(int i=0; i<13;i++)
    {
        suits[i] = "Diamond";
        ranks[i] = cardOne.rank();
        values[i] = cardOne.pointValue();
        cards.add(cardOne);
    }

}


/**
 * Determines if this deck is empty (no undealt cards).
 * @return true if this deck is empty, false otherwise.
 */
public static boolean isEmpty() 
{
    if(cards.size()==0)
        return true;
    else
        return false;
}

/**
 * Accesses the number of undealt cards in this deck.
 * @return the number of undealt cards in this deck.
 */
public static int size() 
{
    return  cards.size();
}

/**
 * Randomly permute the given collection of cards
 * and reset the size to represent the entire deck.
 */
public static List<Card> Shuffled[];
public void shuffle() 
{
    for(int i=0; i<52; i++)
    {
        cards.get(i);

        int k=(int)(Math.random()*100);
        while(k >52 || k<0)
        {
            k=(int)(Math.random()*100);
        }
        if(Shuffled[k]==null)
            Shuffled[k]=(List<Card>) cards.get(i);
    }

}

/**
 * Deals a card from this deck.
 * @return the card just dealt, or null if all the cards have been
 *         previously dealt.
 */
public Card deal() 
{
    int cardDealed= (int)(Math.random()*100);
    while(cardDealed >52 || cardDealed<0)
    {
        cardDealed=(int)(Math.random()*100);
    }
    Shuffled[cardDealed].remove(cardDealed);

    return (Card) Shuffled[cardDealed];
}

/**
 * Generates and returns a string representation of this deck.
 * @return a string representation of this deck.
 */
@Override
public String toString() 
{
    String rtn = "size = " + size + "\nUndealt cards: \n";

    for (int k = size - 1; k >= 0; k--) {
        rtn = rtn + cards.get(k);
        if (k != 0) {
            rtn = rtn + ", ";
        }
        if ((size - k) % 2 == 0) {
            // Insert carriage returns so entire deck is visible on console.
            rtn = rtn + "\n";
        }
    }

    rtn = rtn + "\nDealt cards: \n";
    for (int k = cards.size() - 1; k >= size; k--) {
        rtn = rtn + cards.get(k);
        if (k != size) {
            rtn = rtn + ", ";
        }
        if ((k - cards.size()) % 2 == 0) {
            // Insert carriage returns so entire deck is visible on console.
            rtn = rtn + "\n";
        }
    }

    rtn = rtn + "\n";
    return rtn;
   }
}

I am having trouble figuring out how to put together a tester class. So far I have the following:

import java.util.List;

/**
* This is a class that tests the Deck class.
*/
public class DeckTester extends Deck
{
   public static List<Card> cards;

   public DeckTester(String[] ranks, String[] suits, int[] values) 
   {
        super(ranks, suits, values);
   }

  /**
  * The main method in this class checks the Deck operations for    consistency.
  * @param args is not used.
  */
  public static void main(String[] args) 
  {

  }
}

I also have a fully functioning Card class. I'm just not sure how to check the Deck class.
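One lightweight way to start, before reaching for JUnit, is a plain main-method harness with a small check helper. A sketch of that shape (the Deck-specific calls in the comments are illustrative and assume the constructor's arrays are filled in the way Deck expects):

```java
// A minimal main-method test harness (no JUnit). The commented lines show
// the kind of Deck checks it would hold; the helper itself is generic.
public class MiniTester {
    static int failures = 0;

    // Print PASS/FAIL for one assertion, count failures, and return the
    // result so callers can react to it.
    static boolean check(String what, boolean ok) {
        System.out.println((ok ? "PASS: " : "FAIL: ") + what);
        if (!ok) failures++;
        return ok;
    }

    public static void main(String[] args) {
        // With the real Deck, the checks would look like:
        //   Deck deck = new Deck(ranks, suits, values);
        //   check("new deck holds 52 cards", deck.size() == 52);
        //   check("deck not empty before dealing", !Deck.isEmpty());
        //   check("dealing returns a card", deck.deal() != null);
        check("harness sanity check", 2 + 2 == 4);
        System.out.println(failures + " failure(s)");
    }
}
```

Each check documents one expected behaviour, which also makes it obvious which Deck operation breaks first.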

Test file existence in Bash

I want to test for the existence of a file using a Bash script, but I get the "no file..." message even if the file ACTUALLY exists:

#!/bin/bash
# Usage : myscript.sh AAAAMMJJ
# where AAAAMMJJ is the argument passed to the script  

d_date=$1

# No accurate content here...
# $d_date value is 20160708
# $d_year value is 2016
# $d_month value is 07
# $d_day value is 08

# Directory path
p_path="/home/user/mydir/${year}"

# Filename is something like: my-file_20160708z.html
f_file="${p_path}/my-file_${d_date}z.html"

# Testing if file exists
[ ! -f "${f_file}" ] && echo "File OK" || echo "no file..."

What is the proper way to test file existence with this kind of construct? The same test works perfectly with another file (a ".txt" file).
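For comparison, here is the conventional shape of an existence test, demonstrated with a temporary file so both branches are visibly reachable. Note that -f succeeds when the file exists, so the "exists" branch comes first (no ! in front of it):

```shell
#!/bin/bash

tmp=$(mktemp)          # a file that certainly exists right now

if [ -f "$tmp" ]; then
    echo "File OK"
else
    echo "no file..."
fi

rm -f "$tmp"           # now it certainly does not exist

if [ -f "$tmp" ]; then
    echo "File OK"
else
    echo "no file..."
fi
```

The if/else form also avoids the subtle trap of the `test && a || b` chain, where b runs whenever a itself fails.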

Is it possible to have .class and .java files inside the same source code package?

While going through the source code of some open-source programs, I realized that some projects have both .class and .java files among their sources! I understand that a .class file is an already-compiled .java file and that .class files are binary, but is it possible to have compiled and uncompiled files in a project? If yes, why do that? What are the benefits?

Long story short: I'm trying to study test classes in different projects, and I realized that some programs have test files under the build folder with a .class extension! Do these classes differ in behavior from test classes located under the test package with the regular .java extension? Is there any way to decompile them?

Thanks.

how to manually calculate code coverage percentage for path coverage?

I am manually creating the white-box tests for our system and am having issues with automated coverage testing tools. This is a Java-based system.

Path coverage % = (Total paths exercised / total number of paths in program) * 100

I was able to determine the total paths exercised but I don't know how to get the total number of paths in the program.

Would anyone be able to help me with this?
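For a straight-line program with only branches (no loops), the denominator can be counted by hand: each independent two-way conditional doubles the number of paths. A toy illustration of that count (loops and exceptions make the total unbounded, which is why tools rarely report true path coverage):

```python
def f(a, b):
    # Two independent ifs => 2 * 2 = 4 distinct paths through f.
    taken = []
    if a > 0:
        taken.append('first-if')
    if b > 0:
        taken.append('second-if')
    return tuple(taken)

# Driving both conditions both ways exercises all four paths.
paths = {f(a, b) for a in (-1, 1) for b in (-1, 1)}
print(len(paths))  # 4
```

So for branch-only code the total is 2^k for k independent two-way decisions; in real Java code, control-flow-graph analysis per method is usually needed instead of a single global count.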

Saturday 26 March 2016

Controller testing with js.erb format

I'm using Rails 4.2.5 and have to write a controller test for the destroy action, but I'm using an AJAX call to destroy and a destroy.js.erb file. Please help me solve the following issue so the test passes when it calls the js format; I'm pasting the error below.

def destroy
  @status = @song.destroy
  respond_to do |format|
    format.js
  end
end

SongsControllerTest#test_should_destroy_song:
ActionController::UnknownFormat: ActionController::UnknownFormat
app/controllers/songs_controller.rb:36:in `destroy'

Support for exceptions testing in the core library?

I'm aware of the non-core Test::Exception module, but I'd like to avoid a non-core dependency for something as basic as exception testing. On the other hand, re-implementing this functionality in order to avoid the non-core dependency is even worse.

Is there some other support for exception testing among the core modules?

Runtime Error (NZEC) for TEST (SPOJ) using Python 3.5

The question (TEST) on SPOJ.

This would be my solution to the question

Please do help me out, thank you!

Friday 25 March 2016

How to unit test two objects of different classes which have the same data?

I have two objects which have the same fields. I want to compare the values of those objects' fields. How do I do that using JUnit?

public class DeviceDTO {

private String id;

And

public class DeviceData {
private String id;

I want to compare those objects' field values.
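Two common approaches are explicit per-field assertEquals calls in the JUnit test, or a small reflection helper that walks same-named fields. A hedged sketch of the reflection route (the DeviceDTO/DeviceData stand-ins below are minimal; in a real test the boolean result would be wrapped in assertTrue):

```java
import java.lang.reflect.Field;
import java.util.Objects;

public class FieldCompare {

    // Minimal stand-ins for the two classes in the question.
    static class DeviceDTO  { String id = "42"; }
    static class DeviceData { String id = "42"; }

    // True when every declared field of `a` has an equal, same-named
    // field on `b`; throws if `b` lacks one of `a`'s fields.
    static boolean sameFields(Object a, Object b) throws Exception {
        for (Field fa : a.getClass().getDeclaredFields()) {
            Field fb = b.getClass().getDeclaredField(fa.getName());
            fa.setAccessible(true);
            fb.setAccessible(true);
            if (!Objects.equals(fa.get(a), fb.get(b))) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameFields(new DeviceDTO(), new DeviceData()));
    }
}
```

For a handful of fields, plain getter-by-getter assertEquals calls are usually clearer than reflection; the helper pays off once the classes grow.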

c# unittest list as inparam with no return

I am trying to unit test the code below, without any success. As you can see,

_dataObjectService.ParseDataObjects(listOfObjects, dataObjects, objectMode.New);

a list (listOfObjects) is used as an input parameter. The list is then populated with values inside the method (ParseDataObjects).

I tried to mock the service like this:

dataObjectServiceMock.Setup(m => m.ParseDataObjects(It.IsAny<Dictionary<ObjectMode,List<ObjectItem>>>(), _listUpdateObject, objectMode.New));


private Dictionary<ObjectMode, List<ObjectItem>> Test(Object object, Foo foo)
{
    var listOfObjects = new Dictionary<ObjectMode, List<ObjectItem>>();

    var dataObjects = _dataReader.GetDataObjects(foo.getObjects);
    var dataObjectItems = _dataReader.GetDataObjectItems(foo.getObjects);

    _dataObjectService.ParseDataObjects(listOfObjects, dataObjects, objectMode.New);
    _dataObjectService.ParseDataObjects(listOfObjects, dataObjectItems, objectMode.New);

    return listOfObjects ;
}

When I try to mock this, I can mock _dataObjectService and the Test method, but I have no idea how to mock that list. listOfObjects is always empty.

How can I mock it?