Thursday, December 31, 2015

SyntaxError expecting end-of-input

Hi, I keep getting this message and I don't see anything wrong. Can anybody help me? This is my code:

ERROR["test_account_activation", UserMailerTest, 0.5950376749970019]
 test_account_activation#UserMailerTest (0.60s)
SyntaxError:         SyntaxError: /home/ubuntu/workspace/sample_app/app/mailers/user_mailer.rb:23: syntax error, unexpected keyword_end, expecting end-of-input
            test/mailers/user_mailer_test.rb:6:in `block in <class:UserMailerTest>'

ERROR["test_password_reset", UserMailerTest, 0.6068314979784191]
 test_password_reset#UserMailerTest (0.61s)
SyntaxError:         SyntaxError: /home/ubuntu/workspace/sample_app/app/mailers/user_mailer.rb:23: syntax error, unexpected keyword_end, expecting end-of-input
            test/mailers/user_mailer_test.rb:17:in `block in <class:UserMailerTest>'

user_mailer_test.rb

require 'test_helper'
class UserMailerTest < ActionMailer::TestCase
  test "account_activation" do
    user = users(:michael)
    user.activation_token = User.new_token
    mail = UserMailer.account_activation(user)
    assert_equal "Account activation", mail.subject
    assert_equal [user.email], mail.to
    assert_equal ["noreply@example.com"], mail.from
    assert_match user.name,               mail.body.encoded
    assert_match user.activation_token,   mail.body.encoded
    assert_match CGI::escape(user.email), mail.body.encoded
  end
  test "password_reset" do
    user = users(:michael)
    user.reset_token = User.new_token
    mail = UserMailer.password_reset(user)
    assert_equal "Password reset", mail.subject
    assert_equal [user.email], mail.to
    assert_equal ["noreply@example.com"], mail.from
    assert_match user.reset_token,        mail.body.encoded
    assert_match CGI::escape(user.email), mail.body.encoded
  end
end

I do not see the problem; I think everything is OK! Thanks :)

How to do an automated test for Firstname and Lastname textboxes using Selenium, UnitTest, and ChromeDriver in MVC 5 C#

I designed two textboxes with MVC and C#, and I have seen some tests online, but I don't understand how to test my MVC application or how to navigate to my textboxes inside the test code after driver.Navigate().GoToUrl();.

Thanks,

Parameterize Spock setup

Is it possible to parameterize a Spock setup?

By that I mean, imagine I have an object whose state I want to test. The object can have multiple states, but to simplify things, let's say there's one I'm particularly interested in, S1.

There are multiple ways to get the object to S1. I'm testing state, so all the tests for S1 will be the same regardless of how the object reached S1. The one thing that would differ between test cases would be the setup strategy.

One way to deal with this is to have a base test case (or "spec" to use Spock parlance) and subclasses that only supply different setup strategies.

But, given the nice data-driven features of tests that Spock offers, I got to wondering if there might be some way to parameterize the setup in such a way that I wouldn't need concrete subclass specs.

In effect, I would be saying, here's a spec, now run it with these different ways of executing setup.
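For comparison only, the same shape can be sketched outside Spock in plain Python: the "spec" (the assertions) is written once, and only the setup strategy varies. The `Machine` class and its transitions are hypothetical stand-ins, not Spock syntax:

```python
# Hypothetical object with a target state "S1" reachable in several ways.
class Machine:
    def __init__(self):
        self.state = "S0"

def setup_via_fast_path(m):
    m.state = "S1"

def setup_via_slow_path(m):
    m.state = "intermediate"
    m.state = "S1"

# The "spec" is written once; it never cares how S1 was reached.
def check_s1_behaviour(machine):
    assert machine.state == "S1"

def run_spec_with_strategies(strategies):
    for setup in strategies:
        m = Machine()
        setup(m)                 # parameterized setup
        check_s1_behaviour(m)    # identical assertions for every strategy

run_spec_with_strategies([setup_via_fast_path, setup_via_slow_path])
```

In pytest this is what a parametrized fixture does; the question is whether Spock's data-driven features can play the same role for `setup()`.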

How should I deal with external dependencies in my functions when writing unit tests?

The following function iterates through the names of directories in the file system, and if they are not in there already, adds these names as records to a database table. (Please note this question applies to most languages).

def find_new_dirs():
    dirs_listed_in_db = get_dirs_in_db()

    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)

    return new_dirs

I want to write a unit test for this function. However, the function has a dependency on an external component - a database. So how should I write this test?

I assume I should 'mock out' the database. Does this mean I should take the function get_dirs_in_db as a parameter, like so?

def find_new_dirs(get_dirs_in_db):
    dirs_listed_in_db = get_dirs_in_db()

    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)

    return new_dirs

Or possibly like so?

def find_new_dirs(db):
    dirs_listed_in_db = db.get_dirs()

    new_dirs = []
    for dir in get_directories_in_our_path():
        if dir not in dirs_listed_in_db:
            new_dirs.append(dir)

    return new_dirs

Or should I take a different approach?

Also, should I design my whole project this way from the start? Or should I refactor to this design when the need arises while writing tests?
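For what it's worth, a minimal sketch of how the second variant could be exercised with a fake in place of the real database. The `FakeDb` class is made up for illustration, and the directory listing is injected here as well so the sketch needs no filesystem:

```python
# Hypothetical stand-in for the real database access object.
class FakeDb:
    def __init__(self, dirs):
        self._dirs = dirs

    def get_dirs(self):
        return self._dirs

def find_new_dirs(db, dirs_in_path):
    """Variant of the function above with both dependencies injected."""
    dirs_listed_in_db = db.get_dirs()
    new_dirs = []
    for d in dirs_in_path:
        if d not in dirs_listed_in_db:
            new_dirs.append(d)
    return new_dirs

# The "unit test": no real filesystem or database involved.
db = FakeDb(["a", "b"])
assert find_new_dirs(db, ["a", "b", "c"]) == ["c"]
```

The fake only has to honor the one method the function calls, which is why the second variant (passing an object) and the first (passing a function) are nearly interchangeable here.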

Chapter 10 Ruby on Rails: Expected nil to not be nil failure

Well, I am having some problems with the tests. They are constantly failing, and I have an idea why, but since you guys are more experienced I am asking for help!

This is the error:

DEPRECATION WARNING: You attempted to assign a value which is not explicitly `true` or `false` to a boolean column. Currently this value casts to `false`. This will change to match Ruby's semantics, and will cast to `true` in Rails 5. If you would like to maintain the current behavior, you should explicitly handle the values you would like cast to `false`. (called from create at /home/ubuntu/workspace/sample_app/app/controllers/sessions_controller.rb:9)
DEPRECATION WARNING: You attempted to assign a value which is not explicitly `true` or `false` to a boolean column. Currently this value casts to `false`. This will change to match Ruby's semantics, and will cast to `true` in Rails 5. If you would like to maintain the current behavior, you should explicitly handle the values you would like cast to `false`. (called from create at /home/ubuntu/workspace/sample_app/app/controllers/sessions_controller.rb:9)
 FAIL["test_login_with_valid_information_followed_by_logout", UsersLoginTest, 1.5503864830825478]
 test_login_with_valid_information_followed_by_logout#UsersLoginTest (1.55s)
        Failed assertion, no message given.
        test/integration/users_login_test.rb:22:in `block in <class:UsersLoginTest>'

and this is the second error I get:

DEPRECATION WARNING: You attempted to assign a value which is not explicitly `true` or `false` to a boolean column. Currently this value casts to `false`. This will change to match Ruby's semantics, and will cast to `true` in Rails 5. If you would like to maintain the current behavior, you should explicitly handle the values you would like cast to `false`. (called from create at /home/ubuntu/workspace/sample_app/app/controllers/sessions_controller.rb:9)
 FAIL["test_login_with_remembering", UsersLoginTest, 0.4226432810537517]
 test_login_with_remembering#UsersLoginTest (0.42s)
        Expected nil to not be nil.
        test/integration/users_login_test.rb:42:in `block in <class:UsersLoginTest>'

This is my test/integration/users_login_test.rb

require 'test_helper'

class UsersLoginTest < ActionDispatch::IntegrationTest

  def setup
    @user = users(:michael)
  end

  test "login with invalid information" do
    get login_path
    assert_template 'sessions/new'
    post login_path, session: { email: "", password: "" }
    assert_template 'sessions/new'
    assert_not flash.empty?
    get root_path
    assert flash.empty?
  end

  test "login with valid information followed by logout" do
    get login_path
    post login_path, session: { email: @user.email, password: 'password' }
    assert is_logged_in?
    assert_redirected_to @user
    follow_redirect!
    assert_template 'users/show'
    assert_select "a[href=?]", login_path, count: 0
    assert_select "a[href=?]", logout_path
    assert_select "a[href=?]", user_path(@user)
    delete logout_path
    assert_not is_logged_in?
    assert_redirected_to root_url
    # simulates the user logging out of the session in another window
    delete logout_path
    follow_redirect!
    assert_select "a[href=?]", login_path
    assert_select "a[href=?]", logout_path,      count: 0
    assert_select "a[href=?]", user_path(@user), count: 0
  end

  test "login with remembering" do
    log_in_as(@user, remember_me: '1')
    assert_not_nil cookies['remember_token']
  end

  test "login without remembering" do
    log_in_as(@user, remember_me: '0')
    assert_nil cookies['remember_token']
  end
end

and, if it is necessary, sessions_controller.rb:

class SessionsController < ApplicationController
  def new
  end

  def create
    user = User.find_by(email: params[:session][:email].downcase)
    if user && user.authenticate(params[:session][:password])
      # log the user in and redirect to the user's page
      if user.activated?
        log_in user
        params[:session][:remember_me] == '1' ? remember(user) : forget(user)
        redirect_back_or user
      else
        message = "Cuenta no activada"
        message += "Por favor y no te lo repito mas anda a tu mail para activar tu cuenta forro"
        flash[:warning] = message
        redirect_to root_url
      end
    else
      # create an error message
      flash.now[:danger] = "email/contraseña incorrectos"
      render 'new'
    end
  end

  def destroy
    log_out if logged_in?
    redirect_to root_url
  end
end

What am I doing wrong? Thanks in advance!

Testing two forms on one page: the `press` directive always submits the second form

I am using Laravel 5.1 and whenever I test a page with two forms the second form is always submitted. If I remove the second form, or swap the order of the forms, the test works (but other tests then break). The page behaves as expected in the browser. Any help is appreciated.

edit.blade.php

    @section('content')
    <!-- Update Form -->
            {!! Form::model($article,
                    [
                        'id'=>'editForm', 
                        'method' => 'PATCH', 
                        'action' => ['ArticlesController@update', $article->id] 
                    ]) !!}

    <!-- Title Field -->
        {!! Form::label('title', 'Title:') !!}
        {!! Form::text('title', null, ['id' => 'title']) !!}

    <!-- Content Field -->
        {!! Form::label('content', 'Content:') !!}
        {!! Form::textarea('content', null, ['id' => 'content']) !!}

    <!-- Save Button -->
        {!! Form::submit('Save', ['id' => 'save']) !!}

        {!! Form::close() !!}

    <!-- Delete Form -->
        {!! Form::model($article,
                [
                    'id'=>'deleteForm', 
                    'method' => 'DELETE', 
                    'action' => ['ArticlesController@destroy', $article->id] 
                ]) !!}

    <!-- Delete Button -->
        {!! Form::submit('Delete', ['id' => 'delete']) !!}

        {!! Form::close() !!}
    @stop

ArticlesTest.php

 /**
  * @group articles
  * @test
  */
public function it_edits_an_article()
{
    //$this->markTestSkipped('Test doesn\'t work with two forms on one page.');

    $article1 = factory(Figurosity\Articles\Article::class)->make();
    $article2 = factory(Figurosity\Articles\Article::class)->make();

    $user = factory(User::class)->create();

    $user->articles()->save($article1);
    $this->seeInDatabase('articles', 
        [
            'title' => $article1->title,
            'user_id' => $user->id
        ]);

    $this->actingAs($user)
        ->visit($this->articlesUrl)
        ->see($article1->title)
        ->visit('/articles/'.$article1->slug.'/edit')
        ->type($article2->title, '#title')
        ->type($article2->content, '#content')
        ->press('Save') //this presses the delete button
        ->seePageIs($this->articlesUrl)
        ->see($article2->title)
        ->visit('/whats-new/'.$article1->slug)
        ->see($article2->content)
        ->visit('/latest/'.$article2->slug)
        ->see($article2->content)
        ->seeInDatabase('articles', ['title' => $article2->title]);
}

Behavior check of web application for multiple requests

I have a web application running on a Tomcat server. I want to test how the code I have written behaves under simultaneous requests hitting the server. Say I have two functionalities, A and B, in my web application. I have to test the behavior of the code for multiple simultaneous requests for A, and similarly for B (multiple requests from one user or from multiple users). I have googled how to test a web app for simultaneous requests but could not find anything concrete. Could someone please tell me how I can achieve this?
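As a rough illustration of the idea (not a substitute for a proper load tool such as JMeter), firing N concurrent calls and collecting the results can be sketched in Python; the `hit_endpoint` body is a placeholder for a real HTTP request:

```python
from concurrent.futures import ThreadPoolExecutor

def hit_endpoint(i):
    # Placeholder for a real HTTP call, e.g.:
    # return requests.get("http://localhost:8080/myapp/functionalityA").status_code
    return 200  # pretend every request succeeded

def run_concurrently(n_requests, worker):
    # Fire n_requests calls from a pool of threads and collect the results in order.
    with ThreadPoolExecutor(max_workers=n_requests) as pool:
        return list(pool.map(worker, range(n_requests)))

results = run_concurrently(10, hit_endpoint)
assert all(code == 200 for code in results)
```

The same pattern works for functionality B, or for a mix of both, by swapping in a different worker function.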

Getting started with Manual Testing

Hi, I know this sounds silly, so please do not burn me for it; I am a fresher trying to learn software testing. Can anyone suggest ways of getting started with manual testing, and how much time it takes to become proficient in the manual testing skill set?

React testing component unmount

Let's say I have the following scenario to be tested.

After firing the close button I want to test whether the component is unmounted. So I have:

component = ReactUtils.renderIntoDocument(....);
closeButton = ReactUtils.findRenderedDOMComponentWithClass(component, 'Close-Button');

ReactUtils.Simulate.click(closeButton);

//Assert if component is mounted?

Wednesday, December 30, 2015

Running unit tests when Volley is being used as a submodule

I'm writing my first unit tests for my application, and when I switch the build variant to Unit Tests, Volley's tests blow up and say that JUnit isn't imported. I could import it in the app module's build.gradle, but I don't know how to do that to resolve the errors for Volley. I don't care about Volley's tests, just my own. How can I fix it?

Thank you!

Can't get JUnit tests to fail in Android Studio

I'm trying out Android development but haven't gotten far, because I'm unable to get a test case to fail.

I have the following test case in the androidTest folder:

package com.example.aaronf.myapplication;

import android.test.*;

public class ToDoListTest extends AndroidTestCase {

    private void newToDoListHasNoItems() {
        assertEquals(new ToDoList().length, 0);
    }

    private void addingToDoGivesLengthOfOne() {
        ToDoList toDoList = new ToDoList();
        toDoList.add(new ToDo());
        assertEquals(toDoList.length, 1);
    }

    public void runTests() {
        newToDoListHasNoItems();
        addingToDoGivesLengthOfOne();
    }

    public ToDoListTest() {
        super();
        runTests();
    }
}

The ToDoList class looks like:

package com.example.aaronf.myapplication;

public class ToDoList {
    public int length = 0;

    public void add(ToDo toDo) {

    }
}

It seems like it should fail on addingToDoGivesLengthOfOne(), but I get a green bar.

How do I test a model in django?

More specifically, I am following a course and the only import so far is TestCase from django.test. We are given a coding challenge related to testing a model, so this is very specific. I'm not sure how to accomplish what is being asked. Let me show what is asked first: "Now add a test that creates an instance of the Writer model and, using self.assertIn, make sure the email attribute is in the output of the mailto() method."

So, the model in question has this:

class Writer(models.Model):
    name = models.CharField(max_length=255)
    email = models.EmailField()
    bio = models.TextField()

    def __str__(self):
        return self.name

    def mailto(self):
        return '{} <{}>'.format(self.name, self.email)

What I did to test this, is (all in one file but in steps here):

from .models import Writer
'''that worked fine...'''

Then we have the following (My task part was to fill in everything from def test_writer_creation(self) on):

class WriterModelTestCase(TestCase):
    '''Tests for the Writer model'''

    def test_writer_creation(self):
        writer = Writer.objects.create(
            name="Bruce Whealton",
            email="bruce@example.com",
            bio="Here is a short bio about bruce for testing purposes."
        )
        self.assertIn('bruce@example.com', writer.mailto())

The writer instance should be OK; it seems straightforward. The mailto method should just return a string. So, in this file, articles/tests.py, should I not expect the email value to be in the string output from writer.mailto()?
What am I doing wrong?

Thanks, Bruce
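Outside Django, the string logic being asserted here is easy to check in isolation; a minimal sketch with the model replaced by a plain class (no database involved):

```python
# Plain stand-in for the Django model, just to exercise the mailto() formatting.
class Writer:
    def __init__(self, name, email):
        self.name = name
        self.email = email

    def mailto(self):
        return '{} <{}>'.format(self.name, self.email)

w = Writer("Bruce Whealton", "bruce@example.com")
assert "bruce@example.com" in w.mailto()   # same check as self.assertIn(...)
```

Since the formatting itself clearly includes the email, any test failure would point at the test setup rather than at mailto().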

TestComplete - How can I get the array elements that are stored in their 'var' object when using C#?

I'm trying to write simple automation test using TestComplete in C#. (Not JScript/C# Script, just C#)

I'm using their libraries as you can see here: http://ift.tt/1TpWbOG

and specifically their 'var' type: http://ift.tt/1SmNiaq

I'm trying to identify all the elements on the screen according to specific key and value, using the method "FindAll" (http://ift.tt/1TpWaKL)

var a = someProcess["FindAll"]("text", "Simulate", 200, false);

In debug mode I can see that "a" has two encapsulated elements that were found, and this line passes successfully.

The problem: I'm trying to get the first element, using the line

var b = a["0"];

and get a 'MissingMethodException'.

If I try to use

var b = a(0);

it says I'm trying to use variable as a function.

I couldn't find any method that can help me to get the elements.

Please help

Thanks a lot!

Acceptance Test - Comparing Content between live and development site

I'm working on refactoring legacy code, and I'd like to create a test that simply compares the live site's content to the development site's to make sure the output is identical. Challenges include:

  • No control over the databases, and hence over the content
  • Ajax calls on the website to load content (cURL or wget won't work)
  • All data is behind a login

I've looked at PHP Codeception and Selenium but can't seem to figure out how I can make a real-time comparison.

Can anyone suggest a way of doing this without manually checking each page?
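Assuming the rendered HTML of each page can be captured somehow (e.g. through a Selenium-driven browser, which would handle both the login and the Ajax-loaded content), the comparison step itself is straightforward; a sketch with Python's difflib:

```python
import difflib

def compare_pages(live_html, dev_html, label="page"):
    """Return a unified diff between two captured page sources (empty if identical)."""
    diff = difflib.unified_diff(
        live_html.splitlines(keepends=True),
        dev_html.splitlines(keepends=True),
        fromfile="live/" + label,
        tofile="dev/" + label,
    )
    return "".join(diff)

# Identical captures produce no diff; a mismatch shows exactly what changed.
assert compare_pages("<p>hi</p>", "<p>hi</p>") == ""
```

A test could then iterate over a list of URLs, capture both versions of each page, and fail with the diff whenever `compare_pages` returns a non-empty string.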

salesforce test class default email to case

In Salesforce Apex it is possible to code a custom email handler by implementing the Messaging.InboundEmailHandler interface. Obviously it is possible to code a test class that mocks inbound emails and test-fires its methods. As usual all data created during the execution of test methods is rolled back after the test completes. Good!

However, when using the standard email-to-case settings there is no Apex code, and therefore no test class can be written to cover that functionality. I would still like to have mock inbound emails to test the standard email-to-case functionality in various test scenarios, without committing to the database.

How would I go about that?

Android AutoCompleteTextView example show error when call Google API

"error_message": "This IP, site or mobile application is not authorized to use this API key. Request received from IP address 202.131.115.45, with empty referer" "predictions": [0] "status": "REQUEST_DENIED"

Good book to learn design patterns and their implementation in terms of automation testing frameworks?

Being an automation developer, I would like to know if there is any book or blog that can help me learn about design patterns and how they should be implemented when writing automation testing frameworks.

testOptions setting is ignored when redefining the test task

I have a subproject named servision with 2 tests configs:

  • ti
  • test

Here is my project config:

val beforeTest = taskKey[Unit]("Before test")

val ti = config("ti") extend (Test)
lazy val servision = project
    .configs(regresion, ti)
    .settings(
        inConfig(ti)(Defaults.testTasks) ++
            Seq(
                testOptions in test := Seq(Tests.Filter(!_.startsWith("ti."))),
                testOptions in ti := Seq(Tests.Filter(_.startsWith("ti."))),
                test in ti := {
                   Def.sequential(beforeTest, test in ti).value
                   afterTest()
                }
            )
        )

I've discovered that if I redefine ti:test, then ti:testOptions are completely ignored, and test:testOptions are used instead.

If I comment out the redefinition, then ti:testOptions are used.

I think this is a bug in SBT rather than a misconfiguration.

Tuesday, December 29, 2015

How to perform an integration test with the Opentok API?

We're using the OpenTok API for P2P video and would like to automate testing - ideally using capybara/cucumber.

Is there a command to observe a publisher/subscriber div to see if it's publishing/receiving video?

Spring Test MockMvc perform request on external URL

I'm trying to perform a POST request on a URL outside the current context, and it looks like Spring cannot handle it.

Test code:

        String content = mvc
            .perform(post("http://ift.tt/1R6P7HB")
                    .header("Authorization", authorization)
                    .contentType(MediaType.APPLICATION_FORM_URLENCODED)
                    .param("username", username)
                    .param("password", password))
            .andExpect(status().isOk())
            .andReturn().getResponse().getContentAsString();

Works like a charm on the current context, but the remote service cannot be reached at all. It seems like the "http://ift.tt/1R6P7HD" part is ignored.

Mock creation code:

mvc = MockMvcBuilders.webAppContextSetup(context).build();

Is there any way to make it work? Because using the standard Java HttpUrlConnection class is a huge pain.

Spring boot integrationTest web configuration

I'm trying to implement an integration test for an HTTP client. The HttpClient can use a stub REST controller service to send data for the "other web".

Configuration is:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {
     SpringMvcApplicationConfiguration.class
    , StorageConfiguration.class
    , …
})
@WebIntegrationTest({"server.port=8080", "management.port=0"})
@Transactional
public class HttpSendHelperTest {
  private final static Logger LOGGER =     LoggerFactory.getLogger(HttpSendHelperTest.class);

  @Inject
  private HttpSendHelper httpSendHelper;
  @Inject
  private RequestMappingHandlerMapping mapping;
…
}

The test starts correctly, loading all my configured contexts. Checking mapping.getHandlerMethods(), all URIs are present in the map.

Sending a test request to the default URL gives a 200 response status, but sending to others (trying some of the mapped URIs) gives 404.

RestTemplate restTemplate = new TestRestTemplate();

restTemplate.postForEntity("http://localhost:8080/",
    StubBuilder.getInspection(), String.class);

responseEntity = restTemplate.postForEntity("http://localhost:8080/stub/send",
    StubBuilder.getInspection(), String.class);

Please suggest how to solve the problem.

Use Guice to create components to use with ThreadWeaver

The application I have been working on has been getting more and more complicated, and it's gotten to the point where I keep running into the same concurrency problems over and over again. It no longer made any sense to keep solving the same problems without any regression tests.

That's when I found ThreadWeaver. It was really nice for some simple concurrency cases I cooked up, but I started to get frustrated when trying to do some more complicated cases with my production code. Specifically, when injecting components using Guice.

I've had a bit of a hard time understanding the implications of the way ThreadWeaver runs tests, and looked for any mention of Guice or DI in the wiki documents, but with no luck.

Is Guice compatible with ThreadWeaver?

Here is my test

@Test
public void concurrency_test() {
    AnnotatedTestRunner runner = new AnnotatedTestRunner();
    runner.runTests(OPYLWeaverImpl.class, OPYLSurrogateTranscodingService.class);
}

Here is my test implementation

public class OPYLWeaverImpl extends WeaverFixtureBase {

@Inject private TaskExecutor                 taskExecutor;
@Inject private Serializer                   serializer;
@Inject private CountingObjectFileMarshaller liveFileMarshaller;
@Inject private GraphModel                   graphModel;
@Inject private CountingModelUpdaterService  updaterService;
@Inject private BabelCompiler                babelCompiler;
@Inject private EventBus                     eventBus;

OPYLSurrogateTranscodingService service;

private Path testPath;

@ThreadedBefore
public void before() {
    service = new OPYLSurrogateTranscodingService(eventBus, taskExecutor, serializer, liveFileMarshaller,
            () -> new OPYLSurrogateTranscodingService.Importer(graphModel, babelCompiler, updaterService, eventBus),
            () -> new OPYLSurrogateTranscodingService.Validator(eventBus, babelCompiler),
            () -> new OPYLSurrogateTranscodingService.Exporter(graphModel, updaterService));
}

@ThreadedMain
public void mainThread() {
    testPath = FilePathOf.OASIS.resolve("Samples/fake-powershell-unit-test.opyl");
    service.applyToExistingGraphModel(testPath);
}

@ThreadedSecondary
public void secondaryThread() {

}

@ThreadedAfter
public void after() {

}
}

And the WeaverFixtureBase

public class WeaverFixtureBase {
@Inject protected CountingEventBus eventBus;

@Before public final void setupComponents() {
    Injector injector = Guice.createInjector(new WeaverTestingEnvironmentModule(CommonSerializationBootstrapper.class));
    injector.getMembersInjector((Class) this.getClass()).injectMembers(this);
}
private class WeaverTestingEnvironmentModule extends AbstractModule {

    private final Class<? extends SerializationBootstrapper> serializationBootstrapper;

    public WeaverTestingEnvironmentModule(Class<? extends SerializationBootstrapper> serializationConfiguration) {
        serializationBootstrapper = serializationConfiguration;
    }

    @Override protected void configure() {
        bind(TaskExecutor.class).to(FakeSerialTaskExecutor.class);
        bind(SerializationBootstrapper.class).to(serializationBootstrapper);
        bind(ModelUpdaterService.class).toInstance(new CountingModelUpdaterService());
        bindFactory(StaticSerializationConfiguration.Factory.class);

        CountingEventBus localEventBus = new CountingEventBus();

        bind(Key.get(EventBus.class, Bindings.GlobalEventBus.class)).toInstance(localEventBus);
        bind(Key.get(EventBus.class, Bindings.LocalEventBus.class)).toInstance(localEventBus);
        bind(CountingEventBus.class).toInstance(localEventBus);
        bind(EventBus.class).toInstance(localEventBus);

    }
    @Provides
    @Singleton
    public GraphModel getGraphModel(EventBus eventBus, Serializer serializer) {
        return MockitoUtilities.createMockAsInterceptorTo(new GraphModel(eventBus, serializer));
    }
}
}

But when the classloader loads OPYLWeaverImpl, none of the Guice stuff goes off and I get a big pile of nulls.

I feel like this is one of those "missing-something-really-simple" kind of scenarios. Sorry if it is!

Jenkins & Git - Which is the most appropriate trigger to run tests?

I'm developing plugins for a major system (Moodle), so Continuous Integration will be useful for me.

The idea is to check out the Moodle stable version branches I want to publish the plugin for, to run the tests against these versions.

But as I've never worked with Jenkins or Continuous Integration, I'm not clear on when would be the best moment to trigger the build that runs the tests. These are the build triggers Jenkins offers:

  • Trigger builds remotely (e.g., from scripts)
  • Build after other projects are built
  • Build periodically (cron-like; I don't think would be suitable)
  • Build when a change is pushed to GitHub (could be)
  • Poll SCM (can't see difference with the periodic build)

Apart from these, we have the Git hooks, which at first sight I find more interesting than what is above.

  • Pre/post commit
  • Pre/post merge (could be nice for triggering builds only for certain branches)
  • Pre/post push

Note: the Git plugin for Jenkins always fails when fetching the Moodle repo, seemingly because of its large size (I don't know if the Git plugin is necessary/important for this approach).

How to test install referrer from a YouTube video ad from AdWords

I have added the Google AdWords SDK for tracking conversions from a YouTube video ad, and I have added AdwordsConversionTracking with the conversion ID and label in the code. But I am unable to test it.

Is there any way that I can test it before publishing the app? How do I test the postback URL through a YouTube ad before publishing through the Android Play Store?

Spring boot test profiling

I have, for example, 3 developers: DEV1, DEV2, and DEV3. I want each one to have their own application-DEV1(2)(3).properties in the /test/resources/ folder.

I have a class

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = MyApplication.class) 

I don't want to use the @ActiveProfiles annotation on the class, because then every user who wants to run the tests has to add a value to load their own configuration. I am using IntelliJ, so I set a Maven run configuration with the command

clean test

and the profile DEV1, e.g.

When I run the tests, the result is as follows. At the start of the test-run output I can see:

/usr/lib/jvm/java-8-oracle/bin/java -Dspring.profiles.active=DEV1

But when it comes to the concrete test class, the output is:

2015-12-29 12:52:10.129  INFO 17211 --- [           main] MyClassTest  : Starting MyClassTest on dev with PID 17211 
2015-12-29 12:52:10.130  INFO 17211 --- [           main] MyClassTest  : No profiles are active

What am I missing here?

Monday, December 28, 2015

Is this a bug in the bash 'if - then - elif - then - else - fi' test conditions, or what?

I've read in online articles that non-designated variables, or variables set to null (like VAR= with nothing after it), can still be treated as real. All you need to do is put the $ in front, making it $VAR.

But it seems there is an exception to this rule, and it involves testing a condition using a paired '[ ]' or '[[ ]]'. The errors reported aren't of any help either. You get an error that reads '[[: not found', and what does that really tell you? By a steady process of elimination, and lots of online checking to make sure my code looked good, I worked it out:

    [ $a != $b ], [[ $a -ne $b ]],

with or without the 'if ... fi' structure, will fail if either $a or $b is evaluated by bash to be null. If it is, it is apparently removed as an argument. Say either $a and/or $b was non-declared or set to null; what is passed to the test conditions appears like this:

    [ != $b ], [[ -ne $b ]]
    [ $a != ], [[ $a -ne ]]
    [ != ], [[ -ne ]] 

To deal with this, you can wrap either or both variables in a double-quote pair, so that what reaches the test condition is not a null, though it may evaluate to a null eventually in the next phase. So

    [ $a != $b ], [[ $a -ne $b ]]

might become

    [ "$a" != "$b" ], [[ "$a" -ne "$b" ]]

I say "might become", because this is not a sure thing. What if, instead of "$a" and/or "$b", you were passing an empty string or a space-filled string as a constant? Stripping off the outer double-quote pair,

    [ "$a" != "" ] would become [ $a != ]
    [[ "$a" -ne "" ]] would become [[ $a -ne ]]

If the string enclosed spaces, this is what you would see:

    [ "$a" != "     " ] would become [ $a !=      ]
    [[ "$a" -ne "     " ]] would become [[ $a -ne      ]]

which is effectively the same thing, because extra whitespace doesn't count.

So, we have a problem with some cases of using double-quote pairs, and I don't know how to resolve the conflict that ensues. Avoiding the use of null or spaced constants in test conditions would be one way, I suppose, but I've not read anything about limitations such as this.

Test the limits of my laptop using a Python script

I made a script to multiply a matrix by itself, and I want (as the title describes) to test the limits of my laptop.

How can I do that?

I'm using a function that takes the matrix and the number of times to multiply it by itself:

def matrix_mult(matrix, nbr): 

Golang test with channels does not exit

The following Golang test never exits. I suspect it has something to do with a channel deadlock but, being a Go noob, I am not very certain.

const userName = "xxxxxxxxxxxx"

func TestSynchroninze(t *testing.T) {
    c, err := channel.New(github.ChannelName, authToken)
    if err != nil {
        t.Fatalf("Could not create channel: %s", err)
        return
    }

    state := channel.NewState(nil)
    ctx := context.Background()
    ctx = context.WithValue(ctx, "userId", userName)
    user := api.User{}

    output, errs := c.Synchronize(state, ctx)

    if err = <-errs; err != nil {
        t.Fatalf("Error performing synchronize: %s", err)
        return
    }

    for o := range output {
        switch oo := o.Data.(type) {
        case api.User:
            user = oo
            glog.Infof("we have a USER %s\n", user)
        default:
            t.Errorf("Encountered unexpected data type: %T", oo)
        }
    }
}

Here are the methods being tested.

type github struct {
    client *api.Client
}

func newImplementation(t auth.UserToken) implementation.Implementation {
    return &github{client: api.NewClient(t)}
}

// -------------------------------------------------------------------------------------

const (
    kLastUserFetch = "lastUserFetch"
)

type synchronizeFunc func(implementation.MutableState, chan *implementation.Output, context.Context) error

// -------------------------------------------------------------------------------------

func (g *github) Synchronize(state implementation.MutableState, ctx context.Context) (<-chan *implementation.Output, <-chan error) {
    output := make(chan *implementation.Output)
    errors := make(chan error, 1) // buffer allows preflight errors

    // Close output channels once we're done
    defer func() {
        go func() {
            // wg.Wait()

            close(errors)
            close(output)
        }()
    }()

    err := g.fetchUser(state, output, ctx)
    if err != nil {
        errors <- err
    }

    return output, errors
}

func (g *github) fetchUser(state implementation.MutableState, output chan *implementation.Output, ctx context.Context) error {
    var err error

    var user = api.User{}
    userId, _ := ctx.Value("userId").(string)
    user, err = g.client.GetUser(userId, ctx.Done())

    if err == nil {
        glog.Info("No error in fetchUser")
        output <- &implementation.Output{Data: user}
        state.SetTime(kLastUserFetch, time.Now())
    }

    return err
}

func (c *Client) GetUser(id string, quit <-chan struct{}) (user User, err error) {
    // Execute request
    var data []byte
    data, err = c.get("users/"+id, nil, quit)
    glog.Infof("USER DATA %s", data)

    // Parse response
    if err == nil && len(data) > 0 {
        err = json.Unmarshal(data, &user)

        data, _ = json.Marshal(user)
    }

    return
}

Here is what I see in the console (most of the user details removed)

I1228 13:25:05.291010   21313 client.go:177] GET http://ift.tt/1NLJ5JX
I1228 13:25:06.010085   21313 client.go:36] USER DATA {"login":"xxxxxxxx","id":00000000,"avatar_url":"http://ift.tt/1IzbGDD",...}
I1228 13:25:06.010357   21313 github.go:90] No error in fetchUser

==========EDIT=============

Here is the relevant portion of the api package.

package api

type Client struct {
    authToken auth.UserToken
    http      *http.Client
}

func NewClient(authToken auth.UserToken) *Client {
    return &Client{
        authToken: authToken,
        http:      auth.NewClient(authToken),
    }
}




// -------------------------------------------------------------------------------------
type User struct {
    Id             int    `json:"id,omitempty"`
    Username       string `json:"login,omitempty"`
    Email          string `json:"email,omitempty"`
    FullName       string `json:"name,omitempty"`
    ProfilePicture string `json:"avatar_url,omitempty"`
    Bio            string `json:"bio,omitempty"`
    Website        string `json:"blog,omitempty"`
    Company        string `json:"company,omitempty"`
}

And the channel package

package channel

type Channel struct {
    implementation.Descriptor
    imp implementation.Implementation
}

// New returns a channel implementation with a given name and auth token.
func New(name string, token auth.UserToken) (*Channel, error) {
    if desc, ok := implementation.Lookup(name); ok {
        if imp := implementation.New(name, token); imp != nil {
            return &Channel{Descriptor: desc, imp: imp}, nil
        }
    }

    return nil, ErrInvalidChannel
}

and the implementation package...

package implementation

import "http://ift.tt/1salFH3"

// -------------------------------------------------------------------------------------

// Implementation is the interface implemented by subpackages.
type Implementation interface {
    // Synchronize performs a synchronization using the given state. A context parameters
    // is provided to provide cancellation as well as implementation-specific behaviors.
    //
    // If a fatal error occurs (see package error definitions), the state can be discarded
    // to prevent the persistence of an invalid state.
    Synchronize(state MutableState, ctx context.Context) (<-chan *Output, <-chan error)

    // FetchDetails gets details for a given timeline item. Any changes to the TimelineItem
    // (including the Meta value) will be persisted.
    FetchDetails(item *TimelineItem, ctx context.Context) (interface{}, error)
}

too high Code Coverage in KarmaJS with karma-coverage & Jasmine

I'm using Jasmine as the testing framework for my AngularJS application. I run the tests with the help of Grunt & KarmaJS. KarmaJS also generates the code coverage with the help of karma-coverage.

Now I've created a model for configuration data, which I also have to instantiate for other tests. Because of this instantiation, that file shows up with coverage even though I haven't written any tests for it: simply because all of its lines were executed during the test run, its coverage is 100%.

Now the question: Is there a way to specify in my tests which files they cover?

In PHPUnit there is an @covers annotation which specifies what code is covered by the test.
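karma-coverage has no direct equivalent of PHPUnit's @covers: coverage is per-file and driven by which files get instrumented. One hedged workaround is to narrow the preprocessors map in karma.conf.js so incidental files are never instrumented at all (the paths and the negation pattern below are placeholder assumptions, not from the question):

    // karma.conf.js (fragment): only files matched by the 'coverage'
    // preprocessor are instrumented, so excluded files never show up
    // in the report at 100% just because tests happened to load them.
    module.exports = function (config) {
      config.set({
        frameworks: ['jasmine'],
        preprocessors: {
          // instrument app sources, but skip the configuration model
          'src/**/!(configuration-model).js': ['coverage']
        },
        reporters: ['progress', 'coverage']
      });
    };

This is coarser than @covers (file-level rather than test-level), but it keeps never-tested files out of the numbers.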

Thx

How to specify @RequestMapping params in MockMvc

I have a controller:

@Controller
@RequestMapping(value = "/bookForm")
public class BookFormController {

    @Autowired
    private BookHttpRequestParser parser;

    @Autowired
    private BooksService booksService;

    @RequestMapping(params = "add", method = RequestMethod.POST)
    public String addBook(HttpServletRequest request) {
        try {
            Book newBook = parser.createBookFromRequest(request);
            booksService.addBook(newBook);
        } catch (InvalidTypedParametersException e) {

        }
        return "redirect:index.html";
    }

This controller has a method for adding a book to the DB. The method has a @RequestMapping annotation with params = "add".

I'm trying to apply this params criterion in the controller's unit test method:

@Test
public void addBook() throws Exception{
    HttpServletRequest request = mock(HttpServletRequest.class);
    Book book = new Book();
    when(parser.createBookFromRequest(request)).thenReturn(book);

    mockMvc.perform(post("/bookForm", "add"))
    .andExpect(status().isOk())
    .andExpect(view().name("redirect:index.html"));
}

Where do I specify this @RequestMapping params value?

This:

mockMvc.perform(post("/bookForm", "add"))

doesn't work at all.

Test explorer shows just a part of the tests in the solution

I have a solution with several projects. Test Explorer shows all the tests except those of one of my projects, even though that project was built successfully.

Can somebody help me?

Thanks, P.B

Is it possible to test scroll in using Gemini by Yandex?

Does anyone use Gemini by Yandex for CSS regression testing?

I am faced with the following problem: I need to test scrolling on a page, but as far as I know, Gemini captures the whole page and shows only the part you select with .setCaptureElements('someElement').

E.g. I set the capture element to html (which has 100% height) and my content is very large, but the Gemini screenshot shows only the cut-off upper part of the page, with no possibility to scroll, because the capture itself has no scroll as such...

Maybe some of you have faced the same problem and have a cool solution? Thanks!

dimanche 27 décembre 2015

browser.getProcessedConfig in Protractor

Protractor exposes a getProcessedConfig() function on a global browser object. The documentation does not give enough information on when this function can be helpful:

Get the processed configuration object that is currently being run. This will contain the specs and capabilities properties of the current runner instance.

Set by the runner.

What use cases does getProcessedConfig() cover? Has someone used it before and why?

How to get the css selector of only a part of the element

I want to click only the first part of an element which has both a hyperlink part (containing a checkbox) and a non-hyperlink part. In other words, I want to check the checkbox, but I cannot isolate only the checkbox part of the element. The selector, when clicked, always hits the hyperlink part and takes me to a different page. I want to click only the part of the element which contains the text "I agree to the". I am using a CSS selector like '.Grid>div:nth-child(9)>button:nth-child(1)', which chooses the entire element, and if I use [class="terms-conditions-input"]>label>:nth-child(1) it gives me the hyperlink part only. But I just want the CSS selector for the first part, without the hyperlink. The HTML for the element is given below:

<div class="qa-field qa-checkbox option option-form-control terms-conditions js-question-terms-conditions">
<div class="terms-conditions-input">
<input id="question-terms-conditions" class="pull-left parsley-validated" type="checkbox" data-validate-error-message="Please agree to the Terms & Conditions to continue" data-validate-mincheck="1" data-validate-required="true" name="terms-conditions"/>
<label class="pull-left" for="question-terms-conditions">
 I agree to the 
<a title="Terms & Conditions" target="_blank" href="http://ift.tt/1JDvcJX"> Terms & Conditions </a>
</label>

Is there any way to create a CSS selector for only the "I agree to the" part of the element?

What is the difference in testing types?

Is there a difference in how software is tested in, let's say, a plan-driven approach versus an agile approach?

Do models like waterfall use validation and verification, whereas agile uses TDD?

Please clarify and provide an example of each if possible, so I can understand.

thank you

Scanning project to view code coverage php

I have a PHP project and want to scan it to view code coverage.

I'm new to QA and testing and can't write test cases, so is there any solution that scans my whole project and generates a code-coverage percentage?

samedi 26 décembre 2015

Providing raw values to IAR CODE

I couldn't figure out a script to pass raw values to the microcontroller using IAR. I can't use the existing flash, as the amount of data is far bigger than the controller's flash. Is there a way I can take the test values from a text file on my PC and send them to the code running in IAR? Thanks.

Testing call of multiple methods in phpspec

In the past I have always stumbled across a certain problem with phpspec:

Let's assume I have a method which calls multiple methods on another object:

class Caller {
    public function call(){
       $this->receiver->method1();
       ...
       $this->receiver->method2();
    }
}

In BDD I would first write a test which makes sure method1 will be called.

function it_calls_method1_of_receiver(Receiver $receiver){
    $receiver->method1()->shouldBeCalled();
    $this->call();
}

And then I would write the next test to assure that method2 will be called.

function it_calls_method2_of_receiver(Receiver $receiver){
    $receiver->method2()->shouldBeCalled();
    $this->call();
}

But this test fails in phpspec because method1 gets called before method2. To satisfy phpspec, I have to check for both method calls:

 function it_calls_method2_of_receiver(Receiver $receiver){
    $receiver->method1()->shouldBeCalled();
    $receiver->method2()->shouldBeCalled();
    $this->call();
}

My problem with that is that it bloats up every test. In this example it's just one extra line, but imagine a method which builds an object with a lot of setters: I would need to write out all the setters for every test. It would get quite hard to see the purpose of each test, since every test is big and looks the same.

I'm quite sure this is not a problem with phpspec or BDD, but rather with my architecture. What would be a better (more testable) way to write this?

For example:

public function handleRequest($request, $endpoint){
    $endpoint->setRequest($request);
    $endpoint->validate();
    $endpoint->handle();
}

Here I validate whether a request provides all the necessary info for a specific endpoint (or throw an exception) and then handle the request. I chose this pattern to separate validation from the endpoint logic.

Can a function test be called an integration test at the same time?

This is part of example.c from zlib. I initially wanted to convert these into unit tests using Check, but then I got confused:

Are the tests below function tests? Or could they be called integration tests or unit tests as well?

test_compress(compr, comprLen, uncompr, uncomprLen);
test_deflate(compr, comprLen);
test_inflate(compr, comprLen, uncompr, uncomprLen);
test_large_deflate(compr, comprLen, uncompr, uncomprLen);
test_large_inflate(compr, comprLen, uncompr, uncomprLen);
test_flush(compr, &comprLen);
test_sync(compr, comprLen, uncompr, uncomprLen);
comprLen = uncomprLen;
test_dict_deflate(compr, comprLen);
test_dict_inflate(compr, comprLen, uncompr, uncomprLen);

Selenium WebDriver how to insert text into field

I'm designing tests in the Selenium WebDriver framework and I would like to insert text into the fields of a login box.

Here is the website: http://ift.tt/1NQfrRE. After clicking the "Zaloguj" button in the top right corner, a login box appears. I would like to insert text into the E-mail input field.

Here is my code:

WebElement emailInput =   driver.findElement(By.xpath("//[@id=\"inputFields\"]"));
emailInput.click();
emailInput.sendKeys("grzegorzrudniak@gmail.com");

After execution I get this error: org.openqa.selenium.ElementNotVisibleException: element not visible

Can anyone help me insert text into this field? Please also take a look at the second input in this box, called "Hasło"; the XPath of these two fields is the same. An additional question is how to insert text into the "Hasło" input field as well.

Implementation: Office Exam in C# windowsForm

I have to implement a Microsoft Office exam (Windows Forms, completely local). I have 3 tables (user, question, mark). A user registers for the exam and starts it (the user's information is added to the user table), and random questions appear to the user. The questions are like:

1) Dear user, you have a Word document on the desktop; there is a sentence in it, and you must apply size 24 and the color red to this sentence.

2) You have an Excel document; you must calculate an average in cell C12.

My problem is how I can tell that the answer is correct and that the user actually completed the task (the way they do it isn't important, just the final result).

For example, for the question "create a directory on the desktop", I can check it with this code:

    if (Directory.Exists(Path.Combine(@"C:\Users\Mahsa\Desktop\" + Session.name)))
    {
        Controller.MarkController.AddMark(1, q1, 10, Session.id);
    }
    else
    {
        Controller.MarkController.AddMark(1, q1, 0, Session.id);
    }

And this is my class:

public partial class PracticalQuestion
{
    public int Id { get; set; }
    public string Text { get; set; }
    public string Parameter { get; set; }
    public string Level { get; set; }
} 

Rails Testing - Expected response to be a

I am trying to better understand the server log to enhance my testing environment. For most of the queries I send to the server, I (almost) always get a 200 message, as follows:

Completed 200 OK in 410ms (Views: 403.0ms | ActiveRecord: 1.7ms)

This seems to trigger the following failure in my testing when using the assert_redirected_to method:

Expected response to be a <redirect>, but was <200>

For example, if I update my "Users" model as follows in my controller:

def update
  @user = User.find(params[:id])
  if @user.update_attributes(user_params)
    flash[:success] = "Your profile has been updated"
    redirect_to @user
  end
end

I want to test the redirection. I use:

test "friendly forwarding" do
   get edit_user_path(@user) #user is a very basic fixture that I call in the test
   log_in_as(@user)
   assert_redirected_to edit_user_path(@user)
   patch user_path(@user), user: { name:  "Foo Bar", email: "foo@bar.com" }
   assert_redirected_to @user # the line that makes the test fail
end

What is wrong? Should I use something different from the assert_redirected_to method, or do I have an issue with my code, which should not be sending back a 200 message?

vendredi 25 décembre 2015

Running Protractor against dev/trunk/master selenium server

The Story:

Protractor itself is coming with a built-in webdriver-manager command line tool:

The webdriver-manager is a helper tool to easily get an instance of a Selenium Server running. Use it to download the necessary binaries with:

webdriver-manager update

Now start up a server with:

webdriver-manager start

According to the webdriver-manager binary source code, it uses config.json to download a specific selenium package version into a selenium directory in the protractor package root. For instance, the config currently looks like this:

{
  "webdriverVersions": {
    "selenium": "2.48.2",
    "chromedriver": "2.20",
    "iedriver": "2.48.0"
  }
}

This config is then manually updated when new selenium, chrome or IE driver versions come out.

For this config, running webdriver-manager update would trigger a selenium-server-standalone-2.48.2.jar to be downloaded.

The Question:

Is it possible to have webdriver-manager install the current latest/dev/trunk/master selenium version? And, if not, how can I run protractor tests with the latest dev selenium package version?

How to run the v8 Promise tests against my own implementation?

I have implemented a module that should look exactly like a regular ES6 Promise; and I want to test it as such.

The files that test promises in the node tests are here: http://ift.tt/1miOu1L

However, I cannot work out how to run these files. After putting in the requisite require for my own module, the files fail with syntax errors about missing functions. I seem to be missing some kind of testing suite but can't work out which one it is.

What is the use of conftest.py files?

I recently discovered py.test. It seems great. However I feel the documentation could be better.

I'm trying to understand what conftest.py files are meant to be used for.

In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.

I have two questions:

  1. Is this the correct use of conftest.py? Does it have other uses?
  2. Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
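On question 1: yes, that is the canonical use. Fixtures defined in a conftest.py are injected into any test at or below its directory, with no import needed. A minimal sketch, with both file contents shown in one block (the fixture and test names are made up):

```python
import pytest

# conftest.py -- anything decorated with @pytest.fixture here is visible
# to every test module in this directory and its subdirectories.
@pytest.fixture
def sample_config():
    return {"debug": True, "retries": 3}

# test_settings.py -- pytest matches the argument name 'sample_config'
# to the fixture above; no import of conftest is required.
def test_retries(sample_config):
    assert sample_config["retries"] == 3
```

On question 2: you can have one conftest.py per directory. pytest collects them hierarchically, so a nested conftest.py can add fixtures that apply only to that package, or override a parent's fixture of the same name; conftest.py files are also the usual home for hooks such as pytest_addoption.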

How to set up a testing enviroment with disposable Windows VM images?

So I'm responsible for a lot of web developers who work on Macs. Our company doesn't want to shell out for Parallels/VMware plus Windows 7/8/10 licenses for each machine.

In order for them to be able to test on Internet Explorer, I'd like to set up a Linux or Windows box with a few Windows VM images that they can clone, run and throw away after usage. I want them to connect remotely to the VM. The idea is that they'll always find a standardized and reproducible environment, no matter how much they play around with the settings.

What's a good approach for this? I'm thinking of using QEMU so far, but I'm open to other suggestions. Are there any existing solutions for this, so I don't have to script everything myself? Ideally with a Web or other GUI for the users to clone and connect to a VM?

Mocking - cannot instantiate proxy class of property?

Inside my tests, here is my code:

    [SetUp]
    public void Initialise()
    {
        mockOwinManager = new Mock<IOwinManager<ApplicationUser, ApplicationRole>>();
        mockSearch = new Mock<ISearch<ApplicationUser>>();
        mockMail = new Mock<IRpdbMail>();
        mockUserStore = new Mock<IUserStore<ApplicationUser>>();

        mockOwinManager.Setup(x => x.UserManager).Returns(() => new AppUserManager(mockUserStore.Object));

        sut = new UsersController(mockOwinManager.Object, mockSearch.Object, mockMail.Object);
    }

And then the test itself:

    [Test]
    public void WhenPut_IfUserIsNullReturnInternalServerError()
    {
        //Arrange
        mockOwinManager.Setup(x => x.UserManager.FindByIdAsync(It.IsAny<string>())).Returns(() => null);

        //Act
        var response = sut.Put(new AppPersonUpdate());

        //Assert
        Assert.AreEqual(response.Result.StatusCode, HttpStatusCode.InternalServerError);
    }

But my arrange line throws the following error:

Can not instantiate proxy of class: Microsoft.AspNet.Identity.UserManager`1[[SWDB.BusinessLayer.Identity.ApplicationUser, SWDB.BusinessLayer, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]].
Could not find a parameterless constructor.

Why is this so, given that in Setup I've already configured what I'd like my mockOwinManager's UserManager property to return?

Thanks

how to deal with the "fmt" golang library package for CLI testing

Disclaimer: I wish you a merry XMas and I hope my question does not disturb you!

sample.go:

package main

import(
    "fmt"
    "os"
)


type sample struct {
    value int64
}

func (s sample) useful() {
    if s.value == 0 {
        fmt.Println("Error: something is wrong!")
        os.Exit(1)
    } else {
        fmt.Println("May the force be with you!")
    }
}

func main() {
    s := sample{42}
    s.useful()

    s.value = 0
    s.useful()
}

// output:
// May the force be with you!
// Error: something is wrong!
// exit status 1

I did a lot of research on how to use interfaces in golang testing, but so far I have not been able to wrap my head around this completely. At least I cannot see how interfaces help me when I need to "mock" (apologies for using this word) golang standard library packages like "fmt".

I came up with two scenarios:

  1. use os/exec to test the command line interface
  2. wrap fmt package so I have control and am able to check the output strings

I do not like both scenarios:

  1. I find going through the actual command line convoluted and not performant (see below). It might have portability issues, too.
  2. I believe this is the way to go, but I fear that wrapping the fmt package might be a lot of work (at least wrapping the time package for testing turned out to be a non-trivial task (http://ift.tt/1ItZuDX)).

Actual question here: is there another (better/simpler/more idiomatic) way? Note: I want to do this in pure golang; I am not interested in the next testing framework.

cli_test.go:

package main

import(
    "os/exec"
    "testing"
)


func TestCli(t *testing.T) {
    out, err := exec.Command("go", "run", "sample.go").Output()
    if err != nil {
        t.Fatal(err)
    }
    if string(out) != "May the force be with you!\nError: something is wrong!\n" {
        t.Fatal("There is something wrong with the CLI")
    }
}

jeudi 24 décembre 2015

Responding with a given fixture when requesting window.location.href with Teaspoon

I'm trying to write a test in Teaspoon for one of the public functions in my jQuery UI dialog wrapper module.

In this function we're able to create a dialog by requesting window.location.href which pulls back a cached version of the page from the browser, rather than going out and hitting the server:

createFromFragmentId = function (fragmentId, options) {
    var promise = $.ajax({
            url: window.location.href
        });

    promise.done(function (html) {
        var $html = $('#' + fragmentId, html);
        create($html, options); // the function that creates the dialog
    });
},

My problem is that I can't find a way to get Teaspoon to spoof a response from window.location.href, responding with the markup from a fixture file.

This was my last attempt, which didn't work, and began wreaking havoc on the tests that followed:

describe('createFromFragmentId', function () {
    it('creates a dialog from a given ID', function () {
        this.server = sinon.fakeServer.create();

        this.server.respondWith('GET', window.location.href, [
            200, { 'Content-Type': 'text/html; charset=utf-8' },
            $fixture[0].outerHTML
        ]);

        WEBLINC.dialog.createFromFragmentId('fragment-dialog');

        this.server.respond();

        expect(
            _.isEmpty(NAMESPACE.dialog.current().has('#content-5'))
        ).to.equal(false);

        this.server.restore();
    });
});

A nudge in the right direction would be most appreciated.

Testing Android NavigationView menu item with espresso

As part of my automated tests using Espresso, I would like to assert that a given menu item is visible, and then perform a click on that item. For my visibility check I've tried the following...

onView(viewMatcher...).check(ViewAssertions.matches(isDisplayed()));

Using this, I end up with a NoMatchingViewException.

android.support.test.espresso.NoMatchingViewException: No views in
hierarchy found matching: with id:
com.example.android:id/menuitem_my_item

From what I gathered, it's possible that menu items are not present in the view hierarchy. Has anyone with more experience testing on Android figured out a way around this?

Unit Testing configurable Mock object

I am just learning about test doubles, and I have a problem with implementing a configurable mock object (one whose expected outputs can be set at runtime and whose verification can be done by the mock object itself).

My test code is below. I need to implement a configurable mock object without using a mocking framework, as in the code below.

    [TestMethod]
    public void SaveOrderAndVerifyExpectations()
    {
        IShopDataAccess dataAccess = mocks.CreateMock<IShopDataAccess>();
        Order o = new Order(6, dataAccess);
        o.Lines.Add(1234, 1);
        o.Lines.Add(4321, 3);

        // Record expectations
        dataAccess.Save(6, o);

        // Start replay of recorded expectations
        mocks.ReplayAll();

        o.Save();
        mocks.VerifyAll();
    }

Here is the interface the code above relies on:

    public interface IShopDataAccess
    {
        decimal GetProductPrice(int productId);

        void Save(int orderId, Order o);
    }


And here is an example on this topic, but I don't understand what it does:

 internal class MockShopDataAccess : IShopDataAccess
    {
        private ImplementationCallback implement_;

        internal MockShopDataAccess(ImplementationCallback callback)
        {
            this.implement_ = callback;
        }

        #region IShopDataAccess Members

        public decimal GetProductPrice(int productId)
        {
            MemberData member = new MemberData("GetProductPrice");
            member.Parameters.Add(new ParameterData("productId", productId));    
            this.implement_(member);

            return (decimal)member.ReturnValue;
        }

        public void Save(int orderId, Order o)
        {
            MemberData member = new MemberData("Save");
            member.Parameters.Add(new ParameterData("orderId", orderId));
            member.Parameters.Add(new ParameterData("o", o));

            this.implement_(member);
        }

        #endregion
    }



As a result, I need to make the mock object in the first code listing configurable, and I don't know what I need to do. I suppose I should write a new class? Can someone lead by example? Thanks for any response.

Should I create a factories.py for each app?

I have a project with a few apps and a bunch of models with all sorts of relationships.

I believe that factory_boy could help me keep my project DRY, with a factories.py for each app that can be used in tests and seeds. Is this a valid approach?

How to initialize WebDriver in Groovy Selenium

I am trying to initialize a WebDriver with an instance of FirefoxDriver to do some automation.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

WebDriver driver = new FirefoxDriver();

However, I am getting this error.

TestCase failed [java.lang.NoClassDefFoundError: Could not initialize class org.apache.http.conn.ssl.SSLConnectionSocketFactory:java.lang.NoClassDefFoundError: Could not initialize class org.apache.http.conn.ssl.SSLConnectionSocketFactory], time taken = 0

I am doing this script within SoapUI as a setup script, as this setup script will be used to initialize some header values. I have already dragged the Selenium jar into the SoapUI/bin/ext folder, along with all of its lib jars.

Books on formal methods for software development

I have read two books on software engineering, and I understand the general methods used by teams to develop software. Now I would like to deepen my knowledge of formal methods. Can you suggest the best books currently available?

Testing in Rails. Seems it deletes records within one test run. Ideas?

Let's take a look at the following scenario:

    Scenario: Show welcome feed
      Given log me in
      When I'm on "profile" page
      Then Page should have content "You are welcome"

Pretty simple. What about the log me in step?

    Given /^log me in$/ do
      @user = FactoryGirl.create :user, email: 'user@mail.ru', password: '12345678', username: '
      visit sign_in_path
      fill_in 'user_email', with: 'user@mail.ru'
      fill_in 'user_password', with: '12345678'
      click_on 'Enter'
    end

Nothing complex. But when I run this scenario, it can't log me in because there is no user with the provided email. Lol, what? The same happens in other scenarios where I try to create and log in a user.

  • It worked before; there weren't any changes to the scenarios.
  • It works in the development and production environments. So it's a test-environment issue, I guess.

Any ideas? How can I debug this?

How to test Google Analytics Campaign tracking?

I need to track my iOS app installs by campaign source, so I am using Google Analytics campaign measurement.

But I don't know how to test the campaign tracking before uploading my app to the App Store. How should I test it?

I used Google Analytics campaign measurement for Android before, and they provide a guideline for testing it.

Thanks

How to get Spring Bean from JerseyTest subclass

Here is my abstract class which starts Jersey with given Spring context:

public abstract class AbstractJerseyTest extends JerseyTest {

public void setUp() throws Exception {
    super.setUp();
}

@AfterClass
public void destroy() throws Exception {
    tearDown();
}

@Override
protected URI getBaseUri() {
    return URI.create("http://localhost:9993");
}

@Override
protected Application configure() {
    RestApplication application = new RestApplication();

    Map<String, Object> properties = new HashMap<String, Object>();
    properties.put(ServerProperties.BV_SEND_ERROR_IN_RESPONSE, true);
    properties.put("contextConfigLocation", "classpath:spring-context-test.xml");

    application.setProperties(properties);
    application.register(this);
    return application;
}
}

So, the problem is that I need to access a Spring bean from my test to populate the database with some data.

Jersey version is 2.6

Also, I found a similar question here, but it's related to Jersey 1.x, so it doesn't work for Jersey 2.x.

Could anyone point me in the right direction?

Selenium Continue Script when Element not Found using Try, Catch & Finally

I am using the code snippet below to verify the visibility of an element. If the element is not available, I would like to call a method, which I have included in the catch block. I also included a click event in the finally block, which I need to perform in both cases: when the element is available, and after running the method when it is not. However, the code below doesn't seem to work: I still get a NoSuchElementException even though the element lookup is inside the try block.

    try
    {
        driver.findElement(By.xpath(".//table[@id='wishlist-table']/tbody/tr/td[5]/div/button")).isDisplayed();
    } 
    catch (NoSuchElementException e) 
    {
        System.err.println("Element not found: " + e.getMessage());
        AddItemtoWishlist();
    }
    finally
    {
        driver.findElement(By.xpath(".//table[@id='wishlist-table']/tbody/tr/td[5]/div/button")).click();
    }

How to test a Web Application completely?

I have a web application in Vaadin and I would like to perform some testing. But I'm very new to testing, so I do not have an idea of all the kinds of testing that can be done on this application.

For now, I have only tried some automated GUI testing using tools like Selenium IDE, Robot Framework, Ranorex and Rapise. However, Selenium and Robot Framework are not completely compatible with Vaadin, and although I checked TestBench, in the latest version of TestBench they removed the record-and-play option. The problem with Ranorex and Rapise is that they are licensed products, and I would prefer an open-source solution for now.

Question is:

  1. What are some tools which can easily help me with record-and-play GUI testing of my Vaadin web application?
  2. What other kinds of testing, and which respective tools, could I use for testing a web application besides record/play GUI tests?

An answer to either question would be good! I'm asking for advice as a novice and hope to receive some answers from you experts!

P.S. - I would prefer an easy-to-learn testing tool, especially one with GUI record and play or one that requires little coding (which means I'd rather not hard-code the tests as in JUnit).

Thanks!

mercredi 23 décembre 2015

Mock directory attributes using c#

I am writing a unit test for my application, and I want to mock the Attributes property of DirectoryInfo.

Please, can anyone help me write a mocked unit test for the following method?

    public void CheckPermissionOnFolder(string strDrivePath)
    {
        try
        {
            // Create the directory (if needed) and mark it hidden.
            Directory.CreateDirectory(strDrivePath).Attributes |= FileAttributes.Hidden;
        }
        catch
        {
        }
    }

Django - setUpTestData & Many to Many relationship

I need to add a many-to-many relationship in my setUpTestData sequence so that my tests will run correctly.

According to the docs, many-to-many relationships cannot be created until an object has been saved, as the primary key must exist first. This means that I cannot set the relationship in setUpTestData with Model.objects.create() as I do with the other variables.

Is there any way to include a Many-to-Many relationship in setUpTestData?

Testing function calls that depend on an object returned by a callback

I would like to test the following code:

'use strict';

const amqp = require('amqplib');
const Promise = require('bluebird');

const queueManager = function queueManager() {
  const amqp_host = 'amqp://' + process.env.AMQP_HOST || 'localhost';

  return {
    setupQueue: Promise.method(function setupQueue(queue) {
      return amqp.connect(amqp_host)
        .then(conn => conn.createConfirmChannel())
        .tap(channel => channel.assertQueue(queue));
    }),
    enqueueJob: Promise.method(function enqueueJob(channel, queue, job) {
      return channel.sendToQueue(queue, new Buffer(job));
    }),
    consumeJob: Promise.method(function consumeJob(channel, queue) {
      return channel.consume(queue, msg => msg);
    })
  };
};

module.exports = {
  create: queueManager
}

I want to test my setupQueue, enqueueJob and consumeJob methods to make sure they do the right things to the AMQP server.

For setupQueue, for instance, I want to make sure it uses the connection's createConfirmChannel rather than, say, createChannel, and that it also calls assertQueue.

However, I don't know how to do that.

If I mock the amqp variable with proxyquire, all I'll be able to spy on is the amqp.connect call (which I'll probably stub to avoid hitting any real AMQP servers). But what about the subsequent calls? How do I tap into the conn and channel objects?

Appium Inspector 1.4.16.1 error: "Failed to connect to server, please check that it is running"

I am using Appium on Windows 10 with Genymotion 2.5.4 as the emulator, and I am not able to use Appium Inspector. My Appium version is 1.4.16.1. I have been running Appium Inspector while running my test code, and it still gives the error "Failed to connect to server, please check that it is running". I have set all the capabilities in my code. Please help me solve this issue.

My Appium log:

info: Welcome to Appium v1.4.16 (REV ae6877eff263066b26328d457bd285c0cc62430d) info: Appium REST http interface listener started on 127.0.0.1:4723 info: [debug] Non-default server args: {"address":"127.0.0.1","logNoColors":true,"platformName":"Android","platformVersion":"23","automationName":"Appium"} info: Console LogLevel: debug info: --> POST /wd/hub/session {"desiredCapabilities":{"app":"D:\_Projects\_Test Automation\FormsGallery.Android-Signed.apk","appPackage":"FormsGallery.Android","appActivity":"md529130983bd62f4112a07211b98c3bfae.MainActivity","BROWSER_NAME":"Android","VERSION":"4.4.4","platformName":"Android","deviceName":"Emulator"}} info: Client User-Agent string: Apache-HttpClient/4.3.4 (java 1.5) info: [debug] The following desired capabilities were provided, but not recognized by appium. They will be passed on to any other services running on this server. : BROWSER_NAME, VERSION info: [debug] Using local app from desired caps: D:_Projects_Test Automation\FormsGallery.Android-Signed.apk info: [debug] Creating new appium session 091854d6-22d5-4483-a64e-86593cc7b027 info: Starting android appium info: [debug] Getting Java version info: Java version is: 1.8.0_60 info: [debug] Checking whether adb is present info: [debug] Using adb from D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe info: [debug] Using fast reset? true info: [debug] Preparing device for session info: [debug] Checking whether app is actually present info: Retrieving device info: [debug] Trying to find a connected android device info: [debug] Getting connected devices... 
info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe devices info: [debug] 1 device(s) connected info: Found device 10.71.34.101:5555 info: [debug] Setting device id to 10.71.34.101:5555 info: [debug] Waiting for device to be ready and to respond to shell commands (timeout = 5) info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 wait-for-device info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "echo 'ready'" info: [debug] Starting logcat capture info: [debug] Getting device API level info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "getprop ro.build.version.sdk" info: [debug] Device is at API Level 19 info: Device API level is: 19 info: [debug] Extracting strings for language: default info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "getprop persist.sys.language" info: [debug] Current device persist.sys.language: en info: [debug] java -jar "C:\Program Files (x86)\Appium\node_modules\appium\node_modules\appium-adb\jars\appium_apk_tools.jar" "stringsFromApk" "D:_Projects_Test Automation\FormsGallery.Android-Signed.apk" "C:\Users\User02\AppData\Local\Temp\FormsGallery.Android" en info: [debug] No strings.xml for language 'en', getting default strings.xml info: [debug] java -jar "C:\Program Files (x86)\Appium\node_modules\appium\node_modules\appium-adb\jars\appium_apk_tools.jar" "stringsFromApk" "D:_Projects_Test Automation\FormsGallery.Android-Signed.apk" "C:\Users\User02\AppData\Local\Temp\FormsGallery.Android" info: [debug] Reading strings from converted strings.json info: [debug] Setting language to default info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 push 
"C:\Users\User02\AppData\Local\Temp\FormsGallery.Android\strings.json" /data/local/tmp info: [debug] Checking whether aapt is present info: [debug] Using aapt from D:\adt-bundle-windows-x86_64-20140702\sdk\build-tools\android-4.4W\aapt.exe info: [debug] Retrieving process from manifest. info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\build-tools\android-4.4W\aapt.exe dump xmltree "D:_Projects_Test Automation\FormsGallery.Android-Signed.apk" AndroidManifest.xml info: [debug] Set app process to: FormsGallery.Android info: [debug] Not uninstalling app since server not started with --full-reset info: [debug] Checking app cert for D:_Projects_Test Automation\FormsGallery.Android-Signed.apk. info: [debug] executing cmd: java -jar "C:\Program Files (x86)\Appium\node_modules\appium\node_modules\appium-adb\jars\verify.jar" "D:_Projects_Test Automation\FormsGallery.Android-Signed.apk" info: [debug] App already signed. info: [debug] Zip-aligning D:_Projects_Test Automation\FormsGallery.Android-Signed.apk info: [debug] Checking whether zipalign is present info: [debug] Using zipalign from D:\adt-bundle-windows-x86_64-20140702\sdk\build-tools\android-4.4W\zipalign.exe info: [debug] Zip-aligning apk. 
info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\build-tools\android-4.4W\zipalign.exe -f 4 "D:_Projects_Test Automation\FormsGallery.Android-Signed.apk" C:\Users\User02\AppData\Local\Temp\1151123-9176-1ommac3\appium.tmp info: [debug] MD5 for app is ac894dad9066f52ce250cb57ead31bc9 info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "ls /data/local/tmp/ac894dad9066f52ce250cb57ead31bc9.apk" info: [debug] Getting install status for FormsGallery.Android info: [debug] Getting device API level info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "getprop ro.build.version.sdk" info: [debug] Device is at API Level 19 info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "pm list packages -3 FormsGallery.Android" info: [debug] App is installed info: App is already installed, resetting app info: [debug] Running fast reset (stop and clear) info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "am force-stop FormsGallery.Android" info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "pm clear FormsGallery.Android" info: [debug] Forwarding system:4724 to device:4724 info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 forward tcp:4724 tcp:4724 info: [debug] Pushing appium bootstrap to device... info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 push "C:\Program Files (x86)\Appium\node_modules\appium\build\android_bootstrap\AppiumBootstrap.jar" /data/local/tmp/ info: [debug] Pushing settings apk to device... 
info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 install "C:\Program Files (x86)\Appium\node_modules\appium\build\settings_apk\settings_apk-debug.apk" info: [debug] Pushing unlock helper app to device... info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 install "C:\Program Files (x86)\Appium\node_modules\appium\build\unlock_apk\unlock_apk-debug.apk" info: Starting App info: [debug] Attempting to kill all 'uiautomator' processes info: [debug] Getting all processes with 'uiautomator' info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "ps 'uiautomator'" info: [debug] No matching processes found info: [debug] Running bootstrap info: [debug] spawning: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell uiautomator runtest AppiumBootstrap.jar -c io.appium.android.bootstrap.Bootstrap -e pkg FormsGallery.Android -e disableAndroidWatchers false info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: numtests=1 info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: stream= info: [debug] [UIAUTOMATOR STDOUT] io.appium.android.bootstrap.Bootstrap: info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: id=UiAutomatorTestRunner info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: test=testRunServer info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: class=io.appium.android.bootstrap.Bootstrap info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS: current=1 info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_STATUS_CODE: 1 info: [debug] [BOOTSTRAP] [debug] Socket opened on port 4724 info: [debug] [BOOTSTRAP] [debug] Appium Socket Server Ready info: [debug] [BOOTSTRAP] [debug] Loading json... 
info: [debug] Waking up device if it's not alive info: [debug] Pushing command to appium work queue: ["wake",{}] info: [debug] [BOOTSTRAP] [debug] json loading complete. info: [debug] [BOOTSTRAP] [debug] Registered crash watchers. info: [debug] [BOOTSTRAP] [debug] Client connected info: [debug] [BOOTSTRAP] [debug] Got data from client: {"cmd":"action","action":"wake","params":{}} info: [debug] [BOOTSTRAP] [debug] Got command of type ACTION info: [debug] [BOOTSTRAP] [debug] Got command action: wake info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "dumpsys window" info: [debug] [BOOTSTRAP] [debug] Returning result: {"value":true,"status":0} info: [debug] Screen already unlocked, continuing. info: [debug] Pushing command to appium work queue: ["getDataDir",{}] info: [debug] [BOOTSTRAP] [debug] Got data from client: {"cmd":"action","action":"getDataDir","params":{}} info: [debug] dataDir set to: /data info: [debug] Pushing command to appium work queue: ["compressedLayoutHierarchy",{"compressLayout":false}] info: [debug] [BOOTSTRAP] [debug] Got command of type ACTION info: [debug] [BOOTSTRAP] [debug] Got command action: getDataDir info: [debug] [BOOTSTRAP] [debug] Returning result: {"value":"/data","status":0} info: [debug] [BOOTSTRAP] [debug] Got data from client: {"cmd":"action","action":"compressedLayoutHierarchy","params":{"compressLayout":false}} info: [debug] [BOOTSTRAP] [debug] Got command of type ACTION info: [debug] [BOOTSTRAP] [debug] Got command action: compressedLayoutHierarchy info: [debug] Getting device API level info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "getprop ro.build.version.sdk" info: [debug] [BOOTSTRAP] [debug] Returning result: {"value":false,"status":0} info: [debug] Device is at API Level 19 info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 
10.71.34.101:5555 shell "am start -S -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -f 0x10200000 -n FormsGallery.Android/md529130983bd62f4112a07211b98c3bfae.MainActivity" info: [debug] Waiting for pkg "FormsGallery.Android" and activity "md529130983bd62f4112a07211b98c3bfae.MainActivity" to be focused info: [debug] Getting focused package and activity info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "dumpsys window windows" info: [debug] executing cmd: D:\adt-bundle-windows-x86_64-20140702\sdk\platform-tools\adb.exe -s 10.71.34.101:5555 shell "getprop ro.build.version.release" info: [debug] Device is at release version 4.4.4 info: [debug] Device launched! Ready for commands info: [debug] Setting command timeout to the default of 60 secs info: [debug] Appium session started with sessionId 091854d6-22d5-4483-a64e-86593cc7b027 info: <-- POST /wd/hub/session 303 6194.012 ms - 74 info: --> GET /wd/hub/session/091854d6-22d5-4483-a64e-86593cc7b027 {} info: [debug] Responding to client with success: {"status":0,"value":{"platform":"LINUX","browserName":"Android","platformVersion":"4.4.4","webStorageEnabled":false,"takesScreenshot":true,"javascriptEnabled":true,"databaseEnabled":false,"networkConnectionEnabled":true,"locationContextEnabled":false,"warnings":{},"desired":{"app":"D:\_Projects\_Test Automation\FormsGallery.Android-Signed.apk","appPackage":"FormsGallery.Android","appActivity":"md529130983bd62f4112a07211b98c3bfae.MainActivity","BROWSER_NAME":"Android","VERSION":"4.4.4","platformName":"Android","deviceName":"Emulator"},"app":"D:\_Projects\_Test Automation\FormsGallery.Android-Signed.apk","appPackage":"FormsGallery.Android","appActivity":"md529130983bd62f4112a07211b98c3bfae.MainActivity","BROWSER_NAME":"Android","VERSION":"4.4.4","platformName":"Android","deviceName":"10.71.34.101:5555"},"sessionId":"091854d6-22d5-4483-a64e-86593cc7b027"} info: <-- GET 
/wd/hub/session/091854d6-22d5-4483-a64e-86593cc7b027 200 7.728 ms - 870 {"status":0,"value":{"platform":"LINUX","browserName":"Android","platformVersion":"4.4.4","webStorageEnabled":false,"takesScreenshot":true,"javascriptEnabled":true,"databaseEnabled":false,"networkConnectionEnabled":true,"locationContextEnabled":false,"warnings":{},"desired":{"app":"D:\_Projects\_Test Automation\FormsGallery.Android-Signed.apk","appPackage":"FormsGallery.Android","appActivity":"md529130983bd62f4112a07211b98c3bfae.MainActivity","BROWSER_NAME":"Android","VERSION":"4.4.4","platformName":"Android","deviceName":"Emulator"},"app":"D:\_Projects\_Test Automation\FormsGallery.Android-Signed.apk","appPackage":"FormsGallery.Android","appActivity":"md529130983bd62f4112a07211b98c3bfae.MainActivity","BROWSER_NAME":"Android","VERSION":"4.4.4","platformName":"Android","deviceName":"10.71.34.101:5555"},"sessionId":"091854d6-22d5-4483-a64e-86593cc7b027"} info: --> POST /wd/hub/session/091854d6-22d5-4483-a64e-86593cc7b027/window/current/maximize {"windowHandle":"current"} info: [debug] Responding to client with success: {"status":0,"value":"","sessionId":"091854d6-22d5-4483-a64e-86593cc7b027"} info: <-- POST /wd/hub/session/091854d6-22d5-4483-a64e-86593cc7b027/window/current/maximize 200 4.618 ms - 74 {"status":0,"value":"","sessionId":"091854d6-22d5-4483-a64e-86593cc7b027"} info: --> POST /wd/hub/session {"desiredCapabilities":{}} info: Client User-Agent string: undefined error: Failed to start an Appium session, err was: Error: Requested a new session but one was in progress info: [debug] Error: Requested a new session but one was in progress at [object Object].Appium.start (C:\Program Files (x86)\Appium\node_modules\appium\lib\appium.js:139:15) at exports.createSession (C:\Program Files (x86)\Appium\node_modules\appium\lib\server\controller.js:188:16) at Layer.handle [as handle_request] (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at next 
(C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:110:13) at Route.dispatch (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:91:3) at Layer.handle [as handle_request] (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:267:22 at Function.proto.process_params (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:321:12) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:261:10) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:100:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at C:\Program Files (x86)\Appium\node_modules\appium\lib\server\controller.js:39:7 at Layer.handle [as handle_request] 
(C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:110:13) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:104:14) at Route.dispatch (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\route.js:91:3) at Layer.handle [as handle_request] (C:\Program Files 
(x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:267:22 at Function.proto.process_params (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:321:12) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:261:10) at methodOverride (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\method-override\index.js:79:5) at Layer.handle [as handle_request] (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at trim_prefix (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:302:13) at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:270:7 at Function.proto.process_params (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:321:12) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:261:10) at logger (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\morgan\index.js:136:5) at Layer.handle [as handle_request] (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\layer.js:82:5) at trim_prefix (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:302:13) at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:270:7 at Function.proto.process_params (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:321:12) at next (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\express\lib\router\index.js:261:10) at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\body-parser\lib\read.js:111:5 at done (C:\Program Files 
(x86)\Appium\node_modules\appium\node_modules\body-parser\node_modules\raw-body\index.js:248:14) at IncomingMessage.onEnd (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\body-parser\node_modules\raw-body\index.js:294:7) at IncomingMessage.g (events.js:199:16) at IncomingMessage.emit (events.js:104:17) at _stream_readable.js:908:16 at process._tickDomainCallback (node.js:381:11) info: [debug] Responding to client with error: {"status":33,"value":{"message":"A new session could not be created. (Original error: Requested a new session but one was in progress)","origValue":"Requested a new session but one was in progress"},"sessionId":"091854d6-22d5-4483-a64e-86593cc7b027"} info: <-- POST /wd/hub/session 500 11.658 ms - 250

Java overriding classes in jar for blackbox testing (with maven)

I have a Java Maven project packaged as one big fat jar (the whole project in one file), and I want to black-box test it. The areas I am most interested in checking are the inputs and outputs; alas, those interfaces are APIs and sockets which are hardcoded inside the jar to communicate with a specific port or a specific website.

What I want to do is override the classes inside the jar file that are related to these interfaces, replacing them with my mock interfaces.

The whole new testing project will not go into production, this is purely for testing purposes.

I am using maven.

Any ideas are more than welcome.

JUnit tests: How to check for errors with an try-catch block

So, I need to write a test for some (legacy) code I'm improving. In a method, I try to parse a string (which should be valid JSON), and a possible JSONException is caught if the string doesn't represent valid JSON. Something like:

public void transformToJSON(String source) {
  try {
    JSONObject js = new JSONObject(new JSONTokener(source));
  }
  catch (JSONException e) {
    log(e);
  }
  // then js is added to a HashSet and the method is done
}

So I want to write a test for good input (to see whether I have generated a correct JSON object). This is 'easy' by checking the object in the set.

For wrong input, however, I need to find out whether the correct exception has been thrown. I know that if an exception were thrown out of the method, I could check for it in the test:

  • By setting the rule public ExpectedException thrown = ExpectedException.none(); and checking for it in the test method.
  • By adding @Test(expected = JSONException.class) above the test.

But neither works when the exception is swallowed by a try..catch block.

How can I test whether the proper exception is caught by the catch block? I want to change as little of the source code as possible.

Testing system events with Robotium

I have just started using Robotium for Android testing. I want to test an application's behavior under certain system conditions, such as no internet connection; an incoming call, message, alarm or notification; or low battery or memory. Is it possible to run tests under those conditions using Robotium?

Getting windows only images from the IE Dev Tools site

I think this may be a website bug, but when I choose VirtualBox and Mac on this page (http://ift.tt/1L32Urr), for anything higher than IE9 I get VirtualBox images for Windows only.

How to activate logging in plone testing?

In order to test our Zope based application, we use plone.testing.

It works like a charm, but I cannot find out how to get hold of the logfiles.

I installed SiteErrorLog, and via pdb and app.error_log I am able to view the logs. But I want the logs from the test run to be written to disk, as is common for a normal installation.

In our development and production setups, zope.conf defines where the logfiles should be written. But as far as I know there is no zope.conf for the plone.testing setup.

Any hint is appreciated.

How can I write test case scenarios in Jasmine

I am trying to find an equivalent of Python's test-case scenarios (http://ift.tt/1tEBzIn), where one can list the scenarios and the test then iterates over all of them.

How to count invisible items with Protractor

I can count visible items by using filter like this:

it('should have correct number of visible columns', function () {
  expect(tableHeaders.filter(function (header) {
      return header.isDisplayed()
  }).count()).toBe(6);
});

But how do I better count the invisible ones? The following doesn't work, as header.isDisplayed() returns a promise, not a boolean:

it('should have correct number of visible columns', function () {
  expect(tableHeaders.filter(function (header) {
      return !header.isDisplayed()
  }).count()).toBe(6);
});

So, how should I count invisible items the most Protractor way?

How to get list of routes in laravel test case?

As the title suggests, how do I get the list of routes in a test case? I know it can be done in controllers using Route::getRoutes(), but this will not work in test cases. Can somebody give me an idea on this?

Running Nunit tests with Jenkins on Raspberry Pi

I would like to install Jenkins on my Raspberry Pi and run my NUnit tests with it. Does anyone know how to configure this? Should I install Windows IoT or Linux? Should I run the tests with Mono or NUnit? I would be grateful for any help.

How to test that Certificate Pinning works correctly?

I am trying to use certificate pinning in my app and to verify it with mitmproxy. My app can connect to the API server successfully, and mitmproxy cannot intercept the request, but I still do not know whether the certificate pinning works correctly.

What test methods can prove that certificate pinning works correctly?

Difference between monkey testing and ad hoc testing

What is the difference between monkey testing and ad hoc testing?

Test failure: Expected response to be a <redirect>

I'm a newbie to Rails, and I'm setting up my first test suite.

In my tests I keep getting the following failure:

Expected response to be a <redirect>, but was <200>

This response is odd, as the redirect clearly works when testing in the browser. Does anyone have an idea what could be going on here, or what the best approach is to diagnosing such a problem?

The full console message:

 FAIL["test_should_get_dashboard_when_correct_user", UsersControllerTest, 2015-12-23 18:06:20 +1100]
 test_should_get_dashboard_when_correct_user#UsersControllerTest (1450854380.57s)
Expected response to be a <redirect>, but was <200>
test/controllers/users_controller_test.rb:27:in `block in <class:UsersControllerTest>'

mardi 22 décembre 2015

How to test android abstract activity?

I have a BaseActivity, which is an abstract activity and isn't registered in AndroidManifest.xml. BaseActivity calls getPresenter() in the activity's lifecycle.

public abstract class BaseActivity extends AppCompatActivity{

    public abstract Presenter getPresenter();
    public abstract int getLayout();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(getLayout());
        getPresenter().attachView(this);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        getPresenter().detachView();
    }
}

I use ActivityTestRule to launch BaseActivity, but the following error is shown: java.lang.RuntimeException: Could not launch activity.

How do I test that getPresenter().attachView(this) and getPresenter().detachView() are called at the correct points in the activity's lifecycle?