Friday, June 30, 2017

Running smtpd.DebuggingServer in pytest fails but unittest works?

I am attempting to unit test my emailing functionality using the smtpd package, which provides a debugging server.

When I run this test using pytest it produces an error during collection:

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/smtpd.py:646: in __init__
type=socket.SOCK_STREAM)
E   TypeError: getaddrinfo() got an unexpected keyword argument 'type'

However, if I run this test using unittest, it passes fine. Both are run inside the same virtual environment using Python 3.6.

This is the code I use to instantiate the debugging server and the test functions:

class DebuggingServer(smtpd.DebuggingServer):

    def __init__(self, addr, port):
        smtpd.DebuggingServer.__init__(self, localaddr=(addr, port), remoteaddr=None)


class DebuggingServerThread(threading.Thread):

    def __init__(self, addr='localhost', port=1025):
        threading.Thread.__init__(self)
        self.server = DebuggingServer(addr, port)

    def run(self):
        asyncore.loop(use_poll=True)

    def stop(self):
        self.server.close()
        self.join()


class TestMyEmail(unittest.TestCase):

    server = DebuggingServerThread()

    def setUp(self):

        self.server.start()
        print('Server has started')

    def tearDown(self):

        self.server.stop()
        print('Server has stopped')

    def test_png(self):

        png_files = [os.path.join(DATA_DIR, 'page1.png'),
                 os.path.join(DATA_DIR, 'page2.png')]

        with open(os.path.join(DATA_DIR, 'test_png.txt'), 'w+') as f:
            with redirect_stdout(f):
                success = myemail.mail(recipients=['email@email'],
                                       message="Test email sent from server to test inline attachment of png files",
                                       attachments=png_files,
                                       subject="TestMyEmail.test_png")

        with open(os.path.join(DATA_DIR, 'test_png.eml'), 'w+') as email_file:

            gen = generator.Generator(email_file)
            gen.flatten(success)

        assert type(success)

if __name__ == "__main__":
    unittest.main()
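A minimal sketch of a workaround, under the assumption that the failure is triggered at collection time: the class attribute server = DebuggingServerThread() is evaluated while pytest imports and collects the module, at a point where a pytest plugin or another imported library may have monkeypatched socket.getaddrinfo with an incompatible signature. Creating the server lazily in setUp at least keeps sockets out of the collection phase and helps narrow down the culprit (names below mirror the original code):

import asyncore
import smtpd
import threading
import unittest


class DebuggingServerThread(threading.Thread):

    def __init__(self, addr='localhost', port=1025):
        # daemon=True so a hung asyncore loop cannot keep the process alive
        threading.Thread.__init__(self, daemon=True)
        self.server = smtpd.DebuggingServer((addr, port), None)

    def run(self):
        asyncore.loop(use_poll=True)

    def stop(self):
        self.server.close()
        self.join()


class TestMyEmail(unittest.TestCase):

    def setUp(self):
        # Instantiated per test rather than as a class attribute, so the
        # socket is only opened once the test actually runs.
        self.server = DebuggingServerThread()
        self.server.start()

    def tearDown(self):
        self.server.stop()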

Getting NullPointerException when trying to test two different pages

I am new to automation testing and I have been stuck on testing two different pages (HomePage and AboutPage). My code is here: http://ift.tt/2u7gH0A. Here is the stack trace:




java.lang.NullPointerException
    at org.openqa.selenium.support.pagefactory.DefaultElementLocator.findElement(DefaultElementLocator.java:69)
    at org.openqa.selenium.support.pagefactory.internal.LocatingElementHandler.invoke(LocatingElementHandler.java:38)
    at com.sun.proxy.$Proxy11.click(Unknown Source)
    at pages.HomePage.verifyAboutPage(HomePage.java:37)
    at tests.AboutPageTest.aboutPage(AboutPageTest.java:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
    at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:564)
    at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
    at org.testng.internal.Invoker.invokeMethod(Invoker.java:653)
    at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
    at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
    at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
    at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
    at org.testng.TestRunner.privateRun(TestRunner.java:767)
    at org.testng.TestRunner.run(TestRunner.java:617)
    at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
    at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
    at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
    at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


Test ignored.
JavaScript error: resource://gre/components/nsUrlClassifierListManager.js, line 287: NS_ERROR_NOT_INITIALIZED: Component returned failure code: 0xc1f30001 (NS_ERROR_NOT_INITIALIZED) [nsIUrlClassifierDBService.getTables]
JavaScript error: resource://gre/components/nsUrlClassifierListManager.js, line 287: NS_ERROR_NOT_INITIALIZED: Component returned failure code: 0xc1f30001 (NS_ERROR_NOT_INITIALIZED) [nsIUrlClassifierDBService.getTables]
1498838795121 Marionette INFO Ceased listening


But if I change testng.xml to test only one of my classes, it works just fine.

Dynamically create test cases in Nightwatch.js

Is there a way to create test cases dynamically in Nightwatch.js?

An example use case:

I would like to run the "Conformance" test suite from the Qual-E test engine and use Nightwatch.js to simply read the results of the test cases from the page. At this moment I have a single module file with each test case defined as a separate function:

module.exports = {
    'AudioContext' : function (browser) {
        // test's code
    },

    ...

    'MediaList.length' : function (browser) {
        // test's code
    }
};

When the "Conformance" test suite from the Qual-E test engine changes (which happens from time to time) I need to update the list of test cases in my module file. I would like to have only a single function inside this module file (e.g. the before function) that will read the Qual-E page as a first step and spawn test cases in runtime, so I will always have an up-to-date test suite.

How to simulate mouseMove on d3 component for testing in react

I would like to test a component that uses D3 and reports the component-relative coordinates of a mouseMove event. This component works as expected in the browser: I have a simple callback that logs the coordinates, and the 'Mouse moved' message is also logged:

import React, { Component } from 'react';
import ReactDOM from 'react-dom';
import { mouse, select } from 'd3-selection';

class Overlay extends Component {
  _handleMouseMove(coordinates, callback) {
    console.log('Mouse moved');
    callback(coordinates);
  }

  componentDidMount() {
    this.renderD3Move();
  }
  componentDidUpdate() {
    this.renderD3Move();
  }

  renderD3Move() {
    const handleMouseMove = this._handleMouseMove;
    const callback = this.props.callback;
    const node = ReactDOM.findDOMNode(this);
    select(node).on('mousemove', function handleMouse() {
      handleMouseMove(mouse(node), callback);
    });
  }

  render() {
    return (
      <rect
        className="overlay"
        height={this.props.height}
        width={this.props.width}
      />
    );
  }
}

export default Overlay;

Despite this, I am unable to write a test that triggers the callback or even get the 'Mouse moved' message to log:

import chai from 'chai';
chai.should();
import React from 'react';
import { mount } from 'enzyme';
import sinon from 'sinon';
import Overlay from '.';

describe('<Overlay />', function() {
  var height = 50,
    width = 50;

  it('passes mouse coordinates when mouse moves', function() {
    const callback = sinon.spy();
    var wrapper = mount(
      <Overlay height={height} width={width} callback={callback} />
    );
    wrapper.find('Overlay').simulate('mousemove');
    wrapper.find('rect').simulate('mousemove');
    wrapper.find('Overlay').simulate('mouseMove');
    wrapper.find('rect').simulate('mouseMove');
    callback.called.should.equal(true);
  });
});

My questions are:

  1. Is there a way to trigger mouseMove in the component from the test?
  2. If so, can I also test the coordinates of the mouseMove event?
  3. If there is a fundamental incompatibility with how I'm implementing the component, is there a best practice for how to determine the coordinates of a mouseMove event relative to the node that can be easily tested?

Related questions that don't fully address what I'm asking

How best to test controller variable assignment of has_many associations?

I am trying and failing to test a controller for variable assignment of the belongs_to objects. These are controller tests, and there are a number of areas I could really appreciate some help on, namely:

  1. Should I be writing such tests here and in this way?
  2. If so, how could I get it working?

Code as below:

Company.rb

class Company < ApplicationRecord
    belongs_to  :user
    has_many        :employees, inverse_of: :company
    has_many        :quotes, inverse_of: :company
end

Quote.rb

class Quote < ApplicationRecord
    belongs_to :company
end

Employee.rb

class Employee < ApplicationRecord
    belongs_to :company
end

Company has a controller with usual CRUDs, Quote has a controller with Show and Index, Employee does not have a controller. Companies#create creates all three objects and redirect_to's to Quotes#show which renders various attrs from all three models.

I have a Factory Girl factory set up for each of the models:

Companies.rb

FactoryGirl.define do
  factory :company do
    sequence(:co_name) { |n| "Acme Co #{n}" }
    co_number "06488522"
    postcode "al1 1aa"
    industry :financial_services

    factory :company2 do
    end

    factory :company3 do
    end
  end
end

Quotes.rb

FactoryGirl.define do
  factory :quote do
    lives_overseas true
    payment_frequency :monthly

    factory :quote2 do
    end

    factory :quote3 do
    end
  end
end 

Employees.rb

FactoryGirl.define do
  factory :employee1, class: Employee do
    first_name "MyString"
    last_name "MyString"
    email "MyString"
    initial "MyString"
    gender "MyString"
    date_of_birth "2017-06-20"
    salary 10000

    factory :employee2 do
    end

    factory :employee3 do
    end
  end
end

And I am trying to write controller tests for Quote#show and to test the assignment of the three objects, i.e. @company, @quote and @employees, to their respective variables. Code so far as below:

quotes_controller_spec.rb

require 'rails_helper'

RSpec.describe QuotesController, type: :controller do
  let(:user) {FactoryGirl.create(:user) }
  let(:company) { FactoryGirl.create(:company, user: user) }
  let(:employee1) { FactoryGirl.create(:employee1, company: company) }
  let(:employee2) { FactoryGirl.create(:employee2, company: company) }
  let(:employee3) { FactoryGirl.create(:employee3, company: company) }
  let(:quote) { FactoryGirl.create(:quote, company: company) }

  describe "GET #show" do
    it "returns http success" do
      get :show, params: { company_id: company.id, id: quote.id }
      expect(response).to have_http_status(:success)
    end

    it "assigns requested quote to @quote" do
        get :show, params: { company_id: company.id, id: quote.id } #, employee_id: employee1.id
        expect(assigns(:quote)).to eq(quote) # passes fine
        expect(assigns(:company)).to eq(company) # passes fine
        expect(assigns(:employees)).to eq() # fails 
    end
  end
end

I get an error as below:

Failures:

  1) QuotesController GET #show assigns requested quote to @quote
     Failure/Error: let(:employee1) { FactoryGirl.create(:employee1, company: company) } #Why can i not use these factories?

     NoMethodError:
       undefined method `initial=' for #<Employee:0x007f86d8b15470>
       Did you mean?  initialize

I feel like there are a few core things I am not getting right here:

a) I need somehow to declare the associations within Factory Girl.

b) My tests should somehow be testing the presence of a collection and its assignment to the @employees variable in Quotes#show.

c) I am unsure about whether I am crossing 'lines of separation' that perhaps ought to be present, because I am testing other model objects (Company, Quote and Employee) created in Companies#create and rendered in Quotes#show.

Any help and/or guidance appreciated. An afternoon of reading and googling still leaves me at a loss as to how to get my testing strategy right here and the syntax correct for it to work properly.

How to minimize repetition in tox file

Goal: Successfully execute specific tox commands, and have it run for "just" that specific matched command.

Example: tox -e py35-integration

tox should run only py35-integration, and not include the default or standalone py35 definition.

I have tried two different approaches, which as far as I understand are the two ways to go about this.

  • note: the flake8 command is there just to make it easy to tell the different commands apart and show me what is running. It is not the command I'm actually trying to run.

Furthermore, the ini files show only the relevant parts.

First approach

[tox]
envlist = {py27,py35}, {py27,py35}-integration

[testenv]
commands =
    py27: python -m testtools.run discover
    py35: python -m testtools.run discover
    py27-integration: flake8 {posargs}
    py35-integration: flake8 {posargs}

With this approach, the intention is that tox -e py27-integration runs without also running what is defined for the py27 factor. That is not what happens. Instead, it runs both py27 and py27-integration.

Second approach

[tox]
envlist = {py27,py35}, {py27,py35}-integration

[testenv]
commands =
    python -m testtools.run discover

[testenv:integration]
commands = 
    flake8 {posargs}

Now, here I am explicitly isolating for a "sub" environment with its own command to run for "integration".

However, unfortunately, I'm met with the exact same behaviour: everything matching "py27" is executed.

I'm trying to avoid repeating testenv structures as: [testenv:py27-integration] and [testenv:py35-integration], which contain the exact same definitions (goal is to minimize repetition).

I would love to know if there is a way I can achieve what I am trying to do.

I do not want to venture down doing something like p27-integration as an alternative naming scheme, since our CI pipelines have templates expecting certain name structures, and these names are also idiomatic to tox, in that py27 for example is understood to install a 2.7 virtual environment.
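For reference, tox's factor-conditional settings also accept negated factors (a leading !), which, if I read the tox documentation correctly, keeps a single [testenv] section while separating the two command sets. A minimal sketch, assuming a tox version recent enough to support factor negation:

[tox]
envlist = {py27,py35}, {py27,py35}-integration

[testenv]
commands =
    !integration: python -m testtools.run discover
    integration: flake8 {posargs}

With this, tox -e py35-integration matches only the flake8 line and tox -e py35 only the discover line, with no per-environment sections repeated.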

Finding bugs in code using mutation testing

I have some understanding of how to find a bug using mutants.

So, there is the original code; I make mutants, check for reachability, infection and propagation, and find tests which kill the mutants (if they exist). And then what? How should that help me find a bug in my code?
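To make the idea concrete with a minimal, hypothetical Python example (the function and test below are mine, not from the question): a surviving mutant points at behaviour your tests never pin down, and pinning it down is what can surface a latent bug.

# Hypothetical code under test: bulk discount above ten items.
def total_price(price, quantity):
    if quantity > 10:  # a mutant might change '>' to '>='
        return price * quantity * 0.9
    return price * quantity

# This test kills the '>=' mutant: at exactly ten items the original
# charges full price (20.0), while the mutant would discount to 18.0.
def test_no_discount_at_exactly_ten_items():
    assert total_price(2.0, 10) == 20.0

If that boundary test were missing, the '>=' mutant would survive, telling you that the behaviour at quantity == 10 is unspecified by the suite; untested boundaries are exactly where real off-by-one bugs hide. So mutation testing rarely points at an existing bug directly; it strengthens the suite until such bugs have nowhere to hide.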

Unittest WCF service without RoleEnvironment/Telemetry client

I have a WCF Service:

[ServiceTelemetry]
public class MyWCFClass
{
    private TelemetryClient telemetryClient;

    public MyWCFClass()
    {
        telemetryClient = new TelemetryClient()
        {
            InstrumentationKey = RoleEnvironment.GetConfigurationSettingValue("APPINSIGHTS_INSTRUMENTATIONKEY")
        };
    }
    public string TheMethod()
    {
        return "Nice";
    }
}

Works like a charm. Now I want to unit test this WCF service:

[TestMethod()]
public void DoTest()
{
    MyWCFClass theClass = new MyWCFClass();
    Assert.AreEqual("Nice", theClass.TheMethod());
}

My problem is, that when I run the test, an SEH exception occurs on the constructor of the WCFClass.

Is there any way to fake the RoleEnvironment part or skip the telemetry part?

How to set a test for multiple fetches with Promise.all using jest

I'm using jest for my tests. I'm using react and redux and I have this action:

function getData(id, notify) {
  return (dispatch, ...) => {
    dispatch(anotherFunction());
    // fetch(...) assumed below; the original passed bare strings to Promise.all
    Promise.all([fetch('resource1'), fetch('resource2'), fetch('resource3')])
      .then(([response1, response2, response3]) => {
        // ... handle responses
      })
      .catch(error => { dispatch(handleError(error)); });
  };
}

I've been looking through the jest documentation for how to set up a test for this action, but I was unable to find a way. I tried something like this myself:

it('test description', (done) => {
  const expectedActions = [{type: {...}, payload: {...}},{type: {...}, payload: {...}},...];
  fetchMock.get('resource1', ...);
  fetchMock.get('resource2', ...);
  fetchMock.get('resource3', ...);
   ... then the rest of the test calls
});

Unsuccessfully. So how should I proceed?

Generate classes with gradle and jaxb for tests with different configuration

I'm using

id "com.bmuschko.docker-java-application" version "3.0.7"

http://ift.tt/2tt8fLu

gradle plugin and this configuration:

jaxb {
    xsdDir = "${project.projectDir}/src/main/xsd/"
    bindingsDir = "${project.projectDir}/src/main/xsd/"

    xjc {
        taskClassname = "org.jvnet.jaxb2_commons.xjc.XJC2Task"
        args = [
                '-Xfluent-api'
        ]
    }
}

sourceSets.main.java.srcDirs += "${generatedSources}"
compileJava.dependsOn xjc

but for tests I want to add more arguments to xjc. Maven solves this with the help of executions:

               <plugin>
                    <groupId>org.jvnet.jaxb2.maven2</groupId>
                    <artifactId>maven-jaxb2-plugin</artifactId>
                    <version>0.13.1</version>
                    <executions>
                        <execution>
                            <id>prod</id>
                            <goals>
                                <goal>generate</goal>
                            </goals>
                        </execution>
                        <execution>
                            <id>test</id>
                            <phase>process-test-sources</phase>
                            <goals>
                                <goal>generate</goal>
                            </goals>
                            <configuration>
                                <generateDirectory>target/generated-test-sources/xjc</generateDirectory>
                                <addCompileSourceRoot>false</addCompileSourceRoot>
                                <addTestCompileSourceRoot>true</addTestCompileSourceRoot>
                                <args>
                                    <arg>-Xfluent-api</arg>
                                    <arg>-Xinheritance</arg>
                                    <arg>-Xannotate</arg>
                                    <arg>-Xvalue-constructor</arg>
                                    <arg>-Xequals</arg>
                                    <arg>-XhashCode</arg>
                                    <arg>-XtoString</arg>
                                </args>
                            </configuration>
                        </execution>
                    </executions>
                    <configuration>
                        <schemaDirectory>src/main/xsd</schemaDirectory>
                        <bindingDirectory>src/main/xsd</bindingDirectory>
                        <removeOldOutput>true</removeOldOutput>
                        <extension>true</extension>
                        <verbose>true</verbose>
                        <readOnly>true</readOnly>
                        <args>
                            <arg>-Xfluent-api</arg>
                        </args>
                    </configuration>
                </plugin>

But how to solve it in Gradle? As a result, I want 2 sets of generated classes in different folders: one for production code and one for tests (with additional jaxb plugins enabled).

How to mock a DNS request? (miekg/dns)

I have this small piece of code to explain the codebase I'm trying to test. I've skipped checking errors to make the question short.

func lastCNAME(domain string) (lastCNAME string) {
        ns := "8.8.8.8:53"

        c := dns.Client{}
        m := dns.Msg{}
        m.SetQuestion(domain, dns.TypeA)
        r, _, _ := c.Exchange(&m, ns)
        // Last CNAME
        for _, ans := range r.Answer {
                cname, ok := ans.(*dns.CNAME)
                if ok {
                        lastCNAME = cname.Target
                }
        }
        return lastCNAME
}

What is the best way to mock the dns query to the nameserver 8.8.8.8?

Here is the full code in case anyone's curious.

How to understand unit tests correctly?

Recently I have been learning to test React with jest and enzyme. It seems hard to understand what a unit test is. Here is my code:

import React from "react";

class App extends React.Component {
  constructor() {
    super();
    this.state = {
      value: ""
    };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(e) {
    const value = e.target.value;
    this.setState({
      value
    });
  }
  render() {
    return <Nest value={this.state.value} handleChange={this.handleChange} />;
  }
}

export const Nest = props => {
  return <input value={props.value} onChange={props.handleChange} />;
};

export default App;

and my test:

import React from "react";
import App, { Nest } from "./nest";

import { shallow, mount } from "enzyme";

it("should be goood", () => {
  const handleChange = jest.fn();
  const wrapper = mount(<App />);
  wrapper.find("input").simulate("change", { target: { value: "test" } });
  expect(handleChange).toHaveBeenCalledTimes(1);
});

By my understanding, when I simulate a change event on the input node, a mock handleChange should be called, because on App there is a handleChange function that would be called in the real environment.

IMO, the mocked handleChange should intercept the handleChange on App.

If this is totally wrong, what's the right way to use a mock fn and test that handleChange is called?

Another thing: I searched a lot and read about similar situations, and it seems like this is contrary to unit testing. Probably I should test the two components separately. I can test both: I can test
<Nest value={value} handleChange={handleChange} />
by passing the props manually, and then, with handleChange invoked by a simulated change, the test passes.
But how can I test the connection between the two? I read that

some work is React Team's Work ...

I don't know which parts I have to test in this case, and which parts React has already tested so that I don't need to test them. That's confusing.

Spring mockMvc throws error when using ExceptionHandler

I am using Spring MockMvc, and in my controller I have an @ExceptionHandler. When calling a POST request, I am getting the below error.

Failed to invoke @ExceptionHandler method: public org.springframework.http.ResponseEntity<com.xx.xxx> com.xx.xx.handleException(java.lang.Throwable,org.eclipse.jetty.server.Request)
java.lang.IllegalStateException: Current request is not of type [org.eclipse.jetty.server.Request]: org.springframework.mock.web.MockHttpServletRequest@47fac1bd
        at org.springframework.web.servlet.mvc.method.annotation.ServletRequestMethodArgumentResolver.resolveArgument(ServletRequestMethodArgumentResolver.java:97)

I am not sure why the @ExceptionHandler cannot handle the request if it is of type org.springframework.mock.web.MockHttpServletRequest.

Thursday, June 29, 2017

Location of test automation sources

I'm busy orienting myself on test automation (think of FitNesse, Cucumber or Robot). Now I'm wondering where to put my tests. Roughly, there are three options:

1) The same project as the application under test.
2) A separate project in the same (Git) repo.
3) A separate project in a separate (Git) repo.

What is the common thing to do, if there is one? And why?

Unsupported compiler 'org.llvm.8.1.1.clang.wrapper' selected for architecture 'x86_64'; unable to determine concrete GCC compiler for file

While working on a test, I connected my cellphone (iPhone 5s) to my Mac, and then found that Xcode reported an error. Has anyone solved this problem? Could you help me?

Unsupported compiler 'org.llvm.8.1.1.clang.wrapper' selected for architecture 'x86_64' Unable to determine concrete GCC compiler for file /Users/http://ift.tt/2t7uGU4 of type sourcecode.c.c. Unable to determine concrete GCC compiler for file /Users/http://ift.tt/2snp4bu of type sourcecode.c.c.

As an outsourced SQA testing provider: how would you estimate the workload (and therefore the costs) for testing your client's next project?

Let's say you are in charge of an SQA & Testing company that offers outsourced testing services to other companies that develop software. Let's say you offer 4 general types of testing:

  1. Functional Testing
  2. Performance Testing
  3. Security Testing
  4. Usability Testing

Let's assume you use a fixed price model: your client comes with a project with pre-defined requirements and scope, and a fixed duration, which are unlikely to change. You estimate how much it will cost you to provide testing for that project, and then you offer your client a fixed price that hopefully ensures you will gain some profit above your costs.

My question is about how to do the estimation above. How would you go about estimating the costs of doing testing to a given project X for your client? What information about the project would you look at in order to make a good estimation?

To formalize my question a little bit more, let's say you have to define 4 functions:

  • FunctionalCost(X): estimated cost of providing functional testing for project X
  • PerformanceCost(X): estimated cost of providing performance testing for project X
  • SecurityCost(X): estimated cost of providing security testing for project X
  • UsabilityCost(X): estimated cost of providing usability testing for project X

How would you define these functions?

Using the TestNG framework, how to launch 2 browser windows simultaneously

Using TestNG, I want to open the same URL in 2 different windows of the same browser. The browser windows shouldn't overlap, and both windows should launch simultaneously.

Selendroid: Arbitrary App Testable?

I want to test an arbitrary Android app. This app has just the permission for "full network access".

I want to test it either in the emulator or within my smartphone.

I am not the author of that app but have the .apk file.

Can I do automated tests with selendroid?

If not:

what are other alternatives?

Should we assert specific errors when testing validation in model specs?

There are three common ways to test model validations with RSpec and Rails:

  1. expect(post.errors[:title]).to include("can't be blank")
  2. expect(post.errors[:title].size).to eq(1)
  3. expect(post).to be_valid

IMO, the third option is not good as the record might be invalid due to invalid values in one or more attributes, which may not even include the attribute we intend to test.

So, which approach is best in your opinion, the first or the second?

A few factors to consider:

Only test what you own

Given that the default error messages for built-in validations are set by the developers of the rails framework (we don't own that code), they may change at any given time and if our validation tests expect specific error messages, they will break.

Regarding custom error messages (for built-in validations and custom validators), although they are under our control, the fact that changing those messages will break our tests makes one wonder if the tests aren't too brittle.

Ensure other validation errors do not influence the test result

  • If both the validation we're testing and another validation fail, post.errors[:title].size will be 2, so both first and second options got this covered.
  • If the validation we expect to fail passes and another validation we were expecting to pass fails (both regarding the attribute under test), then our test will pass when it shouldn't. That's likely the argument in favor of asserting specific error messages when testing validations. Can you think of a way to solve this issue without asserting specific error messages?

Relevant opinions:

  • The rspec-rails developers discuss this subject here.
  • The validation tests included in the book Everyday Rails Testing with RSpec by Aaron Sumner do expect specific error messages.

Thanks in advance.

Why does a class that depends on another class not work in TestNG?

My problem: I have 3 classes and an XML file, and I run the code from the XML file. I want to run setting after login, since they are related to each other. The login class runs correctly, but when it arrives at the setting class, it crashes.

public class suiteclass {
    @BeforeTest
    public void beforeTest() { }

    @AfterTest
    public void afterTest() { }
}

public class login extends suiteclass {
    @Test
    public void loginpage() { }
}

public class setting extends suiteclass {
    @Test
    public void settingpage() { }
}

<suite name="myapp">
    <test name="first">
    <classes>
            <class name="cybertalents.loginclass"></class>
            <class name="cybertalents.Settingclass"></class>
        </classes>
    </test>
</suite>

Protractor in headless chrome doesn't execute browser.actions()

My protractor.conf.js(relevant parts)

  capabilities: {
    'browserName': 'chrome',
    'chromeOptions': {
      'args': ['headless', 'disable-gpu']
    }
  }

If I run Protractor in normal mode, all tests pass. They also pass if I replace this piece of code with just map.click().

  browser.actions()
  .mouseDown()
  .mouseMove(map, {x: 500, y: 150})
  .click()
  .mouseDown()
  .perform();

Dropdown is not working in selenium webdriver

I am sending text to a dropdown textbox from an Excel file. It should send the text to the dropdown textbox and then select the particular visible matched text.

This is the HTML code:

<div id="routingPanelLeft">
<p style="margin-bottom: 1em">Selecting an item will append to end of hierarchy</p>
<div id="srlSelectContainer">
<p>Select an SRL:</p>
<select id="srlSelect" class="routing-select" style="display: none;">
<option/>
<option value="11">AS-HTTS-US-CDN</option>
<option value="20">AS-HTTS-US-CORE</option>
<option value="19">AS-HTTS-US-HW</option>
<option value="15">AS-HTTS-US-LAN-SW</option>
<option value="8">AS-HTTS-US-NMS</option>
<option value="13">AS-HTTS-US-RP</option>
<option value="14">AS-HTTS-US-SEC</option>
<option value="12">AS-HTTS-US-VOICE</option>
<option value="16">AS-HTTS-US-Wireless</option>
<option value="7">AS-HTTS-WANSW</option>
<option value="22">AS-US-Unsupported-KW</option>
<option value="50">ATT-INDIA-KWL</option>
<option value="33">ATT-KWL</option>
<option value="63">BELL-IPCC-UC</option>
<option value="52">BWI-GMBH-KWL</option>
<option value="65">CISCO_IT_HTTS</option>
<option value="27">EM1-KWL</option>
<option value="29">EM2-KWL</option>
<option value="36">EMC_SOS</option>
<option value="6">HTTS</option>
<option value="55">HTTS-AMAZON</option>
<option value="57">HTTS-CNS-KWL</option>
<option value="35">HTTS-COMCAST</option>
<option value="350">HTTS-Charter</option>
<option value="25">HTTS-EUEM</option>
<option value="58">HTTS-HSBC</option>
<option value="61">HTTS-HSBC-NCMKW</option>
<option value="43">HTTS-IOX</option>
<option value="42">HTTS-IPCC</option>
<option value="47">HTTS-IPCC-VZW</option>
<option value="68">HTTS-IT</option>
<option value="269">HTTS-MICROSOFT-SP</option>
<option value="60">HTTS-MSGNS</option>
<option value="56">HTTS-Mobility</option>
<option value="24">HTTS-Optus</option>
<option value="54">HTTS-PlatinumPlus</option>
<option value="51">HTTS-RPLAN-RNT</option>
<option value="53">HTTS-RUSSIAN</option>
<option value="49">HTTS-SP</option>
<option value="44">HTTS-SV</option>
<option value="45">HTTS-SVTAC</option>
<option value="26">HTTS-TEST</option>
<option value="10">HTTS-UCC</option>
<option value="66">HTTS-VIDEO</option>
<option value="48">HTTS-VTACPCMM</option>
<option value="41">HTTS-WAN</option>
<option value="70">HTTS-WANSW</option>
<option value="289">HTTS-WIRELESS</option>
<option value="34">HTTS_DT</option>
<option value="369">Jabber Support</option>
<option value="290">LATAM-HTTS</option>
<option value="309">LATAM-HTTS-EMEAR</option>
<option value="2">Lan Switching</option>
<option value="59">MDS_WW-SAN</option>
<option value="3">MultiService Specialization Queues</option>
<option value="64">Nexus-Amzn-kwl</option>
<option value="30">SEG1-KWL</option>
<option value="4">Security</option>
<option value="249">TSA</option>
<option value="62">TWC-SP</option>
<option value="37">VZB_APAC</option>
<option value="38">VZB_EMEA</option>
<option value="39">WW-SOS</option>
<option value="40">WW-SOS-CIM_CR2</option>
<option value="32">WW-SOS-HTTS</option>
<option value="31">WW-SOS-TAC</option>
<option value="46">htts-nexus7k</option>
<option value="389">test_1234</option>
</select>
<span class="custom-combobox">
<span class="ui-helper-hidden-accessible" role="status" aria-live="polite">45 results are available, use up and down arrow keys to navigate.</span>
<input class="custom-combobox-input ui-widget ui-widget-content ui-state-default ui-corner-left ui-autocomplete-input" title="" autocomplete="off"/>
<a class="ui-button ui-widget ui-state-default ui-button-icon-only custom-combobox-toggle ui-corner-right" tabindex="-1" title="Show All Items" role="button" aria-disabled="false">
<span class="ui-button-icon-primary ui-icon ui-icon-triangle-1-s"/>
<span class="ui-button-text"/>
</a>
</span>
</div>

<ul id="ui-id-2" class="ui-autocomplete ui-front ui-menu ui-widget ui-widget-content ui-corner-all" tabindex="0" style="display: block; width: 282px; top: -227.3px; left: 268.8px;">
<li class="ui-menu-item" data-type="undefined" data-id="11" role="presentation">
<a id="ui-id-27599" class="ui-corner-all" tabindex="-1">AS-HTTS-US-CDN</a>
</li>
<li class="ui-menu-item" data-type="undefined" data-id="20" role="presentation">
<a id="ui-id-27600" class="ui-corner-all" tabindex="-1">AS-HTTS-US-CORE</a
</ul>

I have tried the Select, Keys and Actions classes and the sendKeys method, but it is not selecting the value from the dropdown. It sends text to the textbox, which then shows options matching the text, from which it should select the value that I am sending from the Excel file.

These are my efforts:

WebDriverWait wait11 = new WebDriverWait(driver, 5);
wait11.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='srlSelectContainer']/span/input"))).click();
driver.findElement(By.xpath("//*[@id='srlSelectContainer']/span/input")).sendKeys(testData);

    /*WebDriverWait wait11 = new WebDriverWait(driver,5);
    wait11.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='srlSelectContainer']/span/input"))).sendKeys(testData);
    WebDriverWait wait1= new WebDriverWait(driver,5);
    wait1.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='srlSelectContainer']/span/input"))).sendKeys(Keys.ARROW_DOWN,Keys.ENTER);
    WebDriverWait wait2= new WebDriverWait(driver,5);
    wait2.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='srlSelectContainer']/span/input"))).clear();
    */

WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[@id='srlSelectContainer']/span/input"))).click();

WebDriverWait wait2 = new WebDriverWait(driver, 5);
wait2.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='srlSelectContainer']/span/input"))).sendKeys(testData);
driver.findElement(By.xpath("//*[@id='srlSelectContainer']/span/input")).sendKeys(Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ARROW_DOWN, Keys.ENTER);

Please suggest any way forward, as this is a roadblock for me.

Response message: com.mongodb.MongoException: can't find a master

I tried connecting to the MongoDB server through the JMeter MongoDB Source Config element, but each time it shows the above error (Response message: com.mongodb.MongoException: can't find a master).

Angular testing async pipe does not trigger the observable

I want to test a component that uses the async pipe. This is my code:

@Component({
  selector: 'test',
  template: `
    <div>{{ number | async }}</div>
  `
})
class AsyncComponent {
  number = Observable.interval(1000).take(3)
}

fdescribe('Async Compnent', () => {
  let component : AsyncComponent;
  let fixture : ComponentFixture<AsyncComponent>;

  beforeEach(
    async(() => {
      TestBed.configureTestingModule({
        declarations: [ AsyncComponent ]
      }).compileComponents();
    })
  );

  beforeEach(() => {
    fixture = TestBed.createComponent(AsyncComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });


  it('should emit values', fakeAsync(() => {

    tick(1000);
    fixture.detectChanges();
    expect(fixture.debugElement.query(By.css('div')).nativeElement.innerHTML).toBe('0');
  }));
});

But the test fails. It seems Angular does not execute the Observable for some reason. What am I missing?

When I try to log the observable with the do operator, I don't see any output in the browser console.

PowerMockito and CORBA conflict

I am trying to use PowerMockito to do some testing on a project that uses a third-party library, which in turn uses CORBA. I am trying to get past the exception below by finding the source of the problem, but I haven't succeeded in finding information about this issue.

This is the simplest code that reproduces the problem:

Java class

@RunWith(PowerMockRunner.class)
public class ORBTest {

    @Test
    public void initORB() throws Exception {
        ORB.init();
    }    
}

pom.xml

<project xmlns="http://ift.tt/IH78KX" xmlns:xsi="http://ift.tt/ra1lAU"
    xsi:schemaLocation="http://ift.tt/IH78KX http://ift.tt/VE5zRx">

    <modelVersion>4.0.0</modelVersion>
    <groupId>org.test</groupId>
    <artifactId>test</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>    
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-module-junit4</artifactId>
            <version>1.6.6</version>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-api-mockito2</artifactId>
            <version>1.6.6</version>
        </dependency>
    </dependencies>
</project>

Log.error

java.lang.LinkageError: loader constraint violation: 
when resolving method "sun.corba.EncapsInputStreamFactory.newEncapsInputStream(Lorg/omg/CORBA/ORB;[BI)Lcom/sun/corba/se/impl/encoding/EncapsInputStream;" the class loader (instance of org/powermock/core/classloader/MockClassLoader) of the current class, com/sun/corba/se/impl/ior/ObjectKeyFactoryImpl, and the class loader (instance of <bootloader>) for resolved class, sun/corba/EncapsInputStreamFactory, have different Class objects for the type /CORBA/ORB;[BI)Lcom/sun/corba/se/impl/encoding/EncapsInputStream; used in the signature
    at com.sun.corba.se.impl.ior.ObjectKeyFactoryImpl.create(ObjectKeyFactoryImpl.java:222)
    at com.sun.corba.se.impl.resolver.BootstrapResolverImpl.<init>(BootstrapResolverImpl.java:59)
    at com.sun.corba.se.spi.resolver.ResolverDefault.makeBootstrapResolver(ResolverDefault.java:75)
    at com.sun.corba.se.impl.orb.ORBConfiguratorImpl.initializeNaming(ORBConfiguratorImpl.java:419)
    at com.sun.corba.se.impl.orb.ORBConfiguratorImpl.configure(ORBConfiguratorImpl.java:151)
    at com.sun.corba.se.impl.orb.ORBImpl.postInit(ORBImpl.java:483)
    at com.sun.corba.se.impl.orb.ORBImpl.set_parameters(ORBImpl.java:527)
    at com.sun.corba.se.impl.orb.ORBSingleton.getFullORB(ORBSingleton.java:453)
    at com.sun.corba.se.impl.orb.ORBSingleton.getORBData(ORBSingleton.java:632)
    at com.sun.corba.se.spi.orb.ORB.getLogger(ORB.java:477)
    at com.sun.corba.se.spi.orb.ORB.getLogWrapper(ORB.java:523)
    at com.sun.corba.se.impl.logging.ORBUtilSystemException.get(ORBUtilSystemException.java:54)
    at com.sun.corba.se.spi.orb.ORB.<init>(ORB.java:281)
    at com.sun.corba.se.impl.orb.ORBSingleton.<init>(ORBSingleton.java:135)
    at org.omg.CORBA.ORB.init(ORB.java:292)
    at org.test.test.MockitoTest.test(MockitoTest.java:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
    at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
    at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
    at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
    at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:202)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:144)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:118)
    at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
    at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
    at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
    at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:121)
    at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
    at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)

I'd appreciate it if someone could shed some light here, since I don't know what to change to make this work.

Wednesday, June 28, 2017

How can I write a test function for creating a user in Python?

I have to create test functions for my site which return some value. How do I create them?
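The question doesn't say which framework the site uses, so purely as a hedged illustration, here is a minimal sketch assuming a Django project; the model, field values and assertions below are assumptions, not taken from the question:

from django.contrib.auth.models import User
from django.test import TestCase


class CreateUserTest(TestCase):

    def test_create_user_returns_user_with_expected_fields(self):
        # create_user hashes the password and saves the row
        user = User.objects.create_user(
            username='alice', email='alice@example.com', password='secret')
        self.assertEqual(user.username, 'alice')
        self.assertTrue(user.check_password('secret'))

Run it with python manage.py test. For a plain function that "returns some value", an ordinary pytest function asserting on the return value is enough.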

How do GUI automation tools work internally?

Without diving too much into trade secrets and such, I am really curious how test automation tools extract object-level information from an application under test. From what I've researched in the Microsoft UI Automation documentation, would the tool have to interact with the client API layer of UI Automation? I've read elsewhere that some tools inject their own methods to extract that information as a hierarchy?

Handling global imports with Jest, React, and Webpack

I recently started working on an already mature project where I am integrating the Jest testing framework. The project uses React with Webpack. In the webpack config, the team is using webpack.ProvidePlugin to provide React, lodash, and classnames (not what I would have done, but whatever).

The problem I'm running into is that most of the React class files reference the global React at the top of the file (e.g. in class Foo extends React.Component) without importing it, which works fine because of the webpack base config. However, when I run the corresponding test file, I just get React is not defined unless I explicitly require React.

There is a webpack.test.config file that requires the base webpack config, but I still can't get the tests to run without explicitly including import React from 'react' at the top of every file. I don't want to have to manually import React in every file; does anyone have any suggestions?

Rails: Testing file upload validation at model spec

The following code tests image validation within a model spec in a Rails 4.2 app with RSpec 3.5 and the Shrine gem for file uploads.

My questions are:

  • Can you think of a way to improve the following tests or a better way to test those validations?
  • How to improve the speed of the file size validation test? If possible, I'd like to test it without actually having to upload a >10mb file.

Other aspects of the file upload setup are tested in controller and feature specs, which are irrelevant to this question.

RSpec.describe ShareImage, :type => :model do
  describe "#image", :focus do
    let(:image_file) do
      # Could not get fixture_file_upload to work, but that's irrelevant
      Rack::Test::UploadedFile.new(File.join(
        ActionController::TestCase.fixture_path, 'files', filename))
    end
    let(:share_image) { FactoryGirl.build(:share_image, image: image_file) }
    before(:each) { share_image.valid? }

    context "with a valid image file" do
      let(:filename) { 'image-valid.jpg' }
      it "attaches the image to this record" do
        expect(share_image.image.metadata["filename"]).to eq filename
      end
    end

    context "with JPG extension and 'text/plain' media type" do
      let(:filename) { 'image-with-text-media-type.jpg' }
      it "is invalid" do
        expect(share_image.errors[:image].to_s).to include("invalid file type")
      end
    end

    # TODO: Refactor the following test (it takes ~50 seconds to run)
    context "with a >10mb image file" do
      let(:filename) { 'image-11mb.jpg' }
      it "is invalid" do
        expect(share_image.errors[:image].to_s).to include("too large")
      end
    end
  end
end

Write Android Espresso test

I need to write a UI test to validate that clicking the floating action button results in displaying the SecondActivity.

public class MainActivityTest {

  @Rule
  public ActivityTestRule<MainActivity> mActivityRule = new ActivityTestRule<>(
        MainActivity.class);

  @Before
  public void setUp() throws Exception {

  }

  @Test
  public void onClick() throws Exception {
      onView(withId(R.id.button)).perform(click());
  }
}

Is it enough?

And how can I validate that the Activity properly displays the text contents of an incoming object: name, age, phone number? I have just started using Espresso.

Splitting data into test and train sets and not including all the columns

I am working with the Auto data set in the ISLR library. How do I split the data into 75% train and 25% test? I think I split it right, but I can't figure out how to exclude the two columns.

Create a train and a test set

a. 75% train, 25% test

b. set seed to 1234 to get reproducible results

c. do not include columns “name” and “mpg” in the train and test sets

   #3b: set the seed before sampling so the split is reproducible
   set.seed(1234)

   #3a: 75% of the row indices go to the training set
   splitTheData <- sample(nrow(Auto), nrow(Auto) * 0.75, replace = FALSE)

   #3c: drop the "name" and "mpg" columns from both sets
   keepCols <- !(names(Auto) %in% c("name", "mpg"))
   trainSet <- Auto[splitTheData, keepCols]
   testSet  <- Auto[-splitTheData, keepCols]
Why are my Laravel Dusk tests failing to access my site using Jenkins?

I currently have a few browser tests using Laravel Dusk and they run fine on homestead. I'm using Jenkins for continuous integration and while my unit tests run fine, all the Dusk tests fail.

I made sure to run Xvfb beforehand, and I got a screenshot of what Jenkins is "seeing" while trying to run the tests.


What could be the issue? Is it something related to the .env file or maybe DuskTestCase.php?

This is my DuskTestCase.php:

abstract class DuskTestCase extends BaseTestCase
{
    use CreatesApplication;

    /**
     * Prepare for Dusk test execution.
     *
     * @beforeClass
     * @return void
     */
    public static function prepare()
    {
        static::startChromeDriver();
    }

    /**
     * Create the RemoteWebDriver instance.
     *
     * @return \Facebook\WebDriver\Remote\RemoteWebDriver
     */
    protected function driver()
    {
        return RemoteWebDriver::create(
            'http://localhost:9515', DesiredCapabilities::chrome()
        );
    }
}

And this is an example of a test I'm trying to run:

    /** @test */
    public function see_login_page()
    {
        $this->browse(function (Browser $browser) {
            $browser->visit('/')
                ->assertSee('Register Now!');
        });
    }

Testia Tarantula: import test cases from Excel

Is there a way to do it? Can someone please let me know. Any link to a tutorial is also appreciated.

Thanks.

How to do automated, on-device tests for a React Native application?

I want to create automatic tests for an application written in react native. I want to test only logic (not the UI).

Jest seems to be great solution for unit or even integration tests which will be run on a computer. But I would like to test the application on a real device. I have a component without UI that does some logic, wireless communication with other devices etc. I need to test that communication especially, which cannot be done without a device.

Are there any frameworks or standard solutions to do such thing?

Is there a way to properly experiment with Solr field-types?

I'm working with Solr for a basic search engine, and I've created a couple different fieldTypes that include various filters and tokenizers in their analyzer chains.

However, I'm finding it very difficult to assess how these components of the chain interact, and when I query in the Solr Admin, I consistently get different results than I expect, with no clue as to why.

Is there a way to see what a phrase like education:"x university" is being transformed into when I type it in the q section of the Admin?
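One avenue worth noting: Solr exposes a field-analysis endpoint (the same machinery behind the admin UI's Analysis screen) that reports the output of every tokenizer/filter stage for a given input. A hedged sketch using Python's requests; the core name and field type below are placeholders, not taken from the question:

import requests

# Ask Solr to run "x university" through a field type's index-time and
# query-time analyzer chains and report each intermediate stage.
resp = requests.get(
    "http://localhost:8983/solr/mycore/analysis/field",  # core name is a placeholder
    params={
        "analysis.fieldtype": "my_field_type",  # placeholder field type
        "analysis.fieldvalue": "x university",  # analyzed as at index time
        "analysis.query": "x university",       # analyzed as at query time
        "wt": "json",
    },
)
print(resp.json()["analysis"])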

Also, when the phrase goes through the chain can it be transformed into multiple things that are all searched or is it just a single modified phrase?

Thanks for any help!

AngularJS testing with RxJS 5 debounceTime

I have a component that renders a label and an input. In $postLink I add an observer on my input's focus event, and when the focus event is triggered, I set an internal variable focused to true:

$postLink() { 
    Rx.Observable.fromEvent(myInputElement, 'focus')
      .debounceTime(200)
      .subscribe(() => {
      this.focused = true;
      this.$scope.$evalAsync();
    });
}

In my test file (I'm using Jasmine), I have something like this:

it('must set "focused" to true when input get focused', () => {
  // The setupTemplate return a angular.element input, my component controller and scope
  const { input, controller, parentScope } = setupTemplate();
  input[0].dispatchEvent(new Event('focus'));
  $timeout.flush(600);
  parentScope.$digest();
  expect(controller.focused).toBe(true);
});

But my test fails.

If I remove the debounceTime from my component method, the test passes.

What can I do to simulate the debounceTime?

Why do @BeforeClass and @AfterClass not work in TestNG?

This is the suite class:

public class suiteclass {
    WebDriver driver;

    @BeforeClass
    public void beforeSuite() {
        System.setProperty("webdriver.gecko.driver", "E:/Software Testing/Selenium/Programs/geckodriver.exe");
        driver = new FirefoxDriver();
        driver.get("http://52.211.167.71");
        driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
    }

    @AfterClass
    public void aftersuite() {
        driver.close();
    }
}

public class loginclass extends suiteclass {
    @Test
    public void loginpage() {
        loginpage login = new loginpage(driver);
        login.presslogin();
        login.sendkeys(" ", " ");
        login.loginbutton();
        login.seetingpage();
    }
}

The login page does not pick up the @BeforeClass and @AfterClass methods from suiteclass.

Testing mounted components with react-router v4

I'm migrating our application to react-router@4.x, and I'm having trouble getting the tests that use enzyme's mount function to pass.

Whenever I mount a component that has a withRouter-wrapped sub-component, I run into trouble.

Here's an example of a simplified situation of what I'm testing:

class SomePage extends React.Component {
  componentDidMount() {
    this.props.getData();
  }
  render() {
    return (
      <div>
        <ComponentWrappedInWithRouter />
      </div>
    );
  }
}

const Component = (props) => (
  <button onClick={() => props.history.push('/new-route') }>
    Click to navigate
  </button>
);

const ComponentWrappedInWithRouter = withRouter(Component);

Now, I want to test that SomePage calls its prop getData after it's mounted. Stupid example, I know, but the structure should be sufficient.

Whenever I write a test that uses enzyme's mount, I get the following error:

TypeError: Cannot read property 'route' of undefined
  at Route.computeMatch (node_modules/react-router/Route.js:68:22)
  at new Route (node_modules/react-router/Route.js:47:20)

Now, I think the reason this happens is because I try to call withRouter for a component that does not have router on the context - i.e. it has not been wrapped in <BrowserRouter /> or one of its siblings.

I can fix this by wrapping my component in a <BrowserRouter /> at test time, but that isn't a very nice fix.

This wasn't an issue with react-router@3.x, but since the current major was a full rewrite, I totally understand that these issues will arise.

Any ideas as to how I can fix this?

SMS API with a sandbox/test environment

I need an API for SMS sending that has a sandbox/test environment. What I mean is that I should be able to initialize the API with some option sandbox = true, or with a sandboxed API key, so that the messages are not actually sent but can be browsed in some kind of interface for testing/reviewing purposes. Much like you do with things like mailtrap.io.

P.S. Many SMS API providers have dropped such features, saying: you can use inversion of control to build your own sandbox!!1 I don't want to discuss the... logic (or lack thereof) of this reasoning, but this is not what I'm looking for and it's not an acceptable solution.

When calling pytest on a directory, it doesn't show captured stdout upon test failure

If I use pytest on a singular test file:

pytest -v tests/file_to_test.py

Then my logger will be captured in stdout. However, if I run pytest on the whole directory of tests:

pytest -v tests/

I do not get the captured stdout.

For the logging, I am using

logging.basicConfig(level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger(__name__)
...
logger.info('show in stdout')

and if I use a plain print statement, the captured stdout is shown.

I have tried pytest-capturelog and pytest-catchlog, and neither works for me when I run the whole directory. Any ideas?

EDIT: -s also isn't what I am looking for here.
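One plausible cause, offered as an assumption rather than a diagnosis: logging.basicConfig configures the root logger only on its first call, and it binds whatever object sys.stdout happens to be at that moment. When a whole directory is collected, another module may win that race, or pytest's capture machinery may already have swapped sys.stdout, leaving the handler writing to a stale stream. A sketch of a handler that looks up sys.stdout at emit time instead:

import logging
import sys


class LazyStdoutHandler(logging.StreamHandler):
    """Resolve sys.stdout on every emit, so pytest's per-test stream
    replacement is always respected."""

    @property
    def stream(self):
        return sys.stdout

    @stream.setter
    def stream(self, value):
        pass  # ignore the stream bound at construction time


logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(LazyStdoutHandler())
logger.info('shows up in captured stdout on failure')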

scenario outline in robot framework with Gherkin

I want to pass multiple arguments to my TEST CASE in the most "elegant" way possible in Robot Framework. The scenario outline used in many frameworks allows replacing variables/keywords with values from a table. Each row in the table is considered to be a scenario.

Is there any equivalent for scenario outline in Robot Framework?

The different types of white-box tests

Hello,

I only know two examples of white-box test types (unit testing and integration testing), and I would like more examples of white-box test types.

Thanks.

Angular 2/4 Karma testing

I get an error when trying to import a TypeScript class into my test file. This is my karma.conf.js:

// Karma configuration
// Generated on Mon Jun 12 2017 14:21:41 GMT+0200 (Central European Daylight Time)

module.exports = function (config) {
    config.set({

        // base path that will be used to resolve all patterns (eg. files, exclude)
        basePath: 'src/app',


        // frameworks to use
        // available frameworks: http://ift.tt/1ft83uu
        frameworks: ['jasmine'],


        // list of files / patterns to load in the browser
        files: [
            { pattern: '**/*.spec.ts' }
        ],

        // list of files to exclude
        exclude: [
        ],

        // preprocess matching files before serving them to the browser
        // available preprocessors: http://ift.tt/1gyw6MG
        preprocessors: {
            '**/*.ts': ['typescript']
        },

        // test results reporter to use
        // possible values: 'dots', 'progress'
        // available reporters: http://ift.tt/1ft83KQ
        reporters: ['progress'],


        // web server port
        port: 9876,


        // enable / disable colors in the output (reporters and logs)
        colors: true,


        // level of logging
        // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
        logLevel: config.LOG_INFO,


        // enable / disable watching file and executing tests whenever any file changes
        autoWatch: true,


        // start these browsers
        // available browser launchers: http://ift.tt/1ft83KU
        browsers: ['Chrome'],


        // Continuous Integration mode
        // if true, Karma captures browsers, runs the tests and exits
        singleRun: false,

        // Concurrency level
        // how many browser should be started simultaneous
        concurrency: Infinity,

        mime: {
            'text/x-typescript': ['ts', 'tsx']
        }
    });
};

I have written a simple test which does nothing but try to import a class:

import { ApiResponseTimeRepository } from './ApiResponseTimeRepository';


describe('testing test', () => {
    it('should pass', () => {
        expect(1 + 1).toBe(2);
    });
});

But when I run the test I get Uncaught ReferenceError: exports is not defined at TestRepositories.spec.js:2.

The test works fine when I omit the import, so I figured there was a problem with the TypeScript transpiling and installed karma-typescript-preprocessor, but I keep getting the error. What seems to be the problem?
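A hedged reading of the error: the compiled spec is emitted as CommonJS, so the browser evaluates a bare exports with no module loader present. The karma-typescript plugin (a different package from karma-typescript-preprocessor) both compiles and bundles, which sidesteps this; a minimal sketch of its configuration:

// karma.conf.js -- a minimal sketch assuming the "karma-typescript" package
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine', 'karma-typescript'],
        files: [{ pattern: 'src/app/**/*.ts' }],
        preprocessors: { '**/*.ts': ['karma-typescript'] },
        reporters: ['progress', 'karma-typescript'],
        browsers: ['Chrome']
    });
};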

Nesting or grouping tests in pgTAP

I recently started working on a project that uses pgTAP for testing custom plpgsql functions. To write cleaner tests with more readable output, I was wondering if it is possible to nest or group tests. What I'm looking for is output similar to the following:

       with this configuration
ok 1 -   it can do task 1
ok 2 -   it can do task 2
       with other configuration
ok 3 -   it can do task 3
ok 4 -   it can do task 4

The configuration could be variables that are shared between the individual tests, or just properties that the queries in the tasks have in common. I know this nesting and grouping from other test suites like RSpec and have always found it quite useful, since the output is very descriptive and readable.

What I'm doing at the moment to mimic the desired output is adding pass() tests and manually indenting the description strings, as seen below. This works but is not ideal in my opinion, since it increases the number of tests.

PREPARE shared_want AS some_query;

SELECT pass('with this configuration');

PREPARE task1_have AS some_query;
SELECT results_eq('task1_have',
                  'shared_want',
                  '  it can do task 1');

PREPARE task2_have AS some_query;
SELECT results_eq('task2_have',
                  'shared_want',
                  '  it can do task 2');

Is there a way to accomplish this in a better way?

Thanks, Felix
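One alternative worth noting, assuming a reasonably recent pgTAP: diag() emits a TAP diagnostic comment instead of a passing test, so the group header no longer inflates the test count.

-- a minimal sketch: diag() prints "# with this configuration" as a TAP
-- comment and is not counted in the plan
SELECT diag('with this configuration');

PREPARE task1_have AS some_query;
SELECT results_eq('task1_have', 'shared_want', '  it can do task 1');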

Unable to locate an element using Appium?

I was trying to select a particular element in my mobile app to enter a mobile number. Everything else in the app works, but I'm not able to locate the mobile number field to enter the number; currently the click lands on the country-code spinner that prefixes the mobile number field.

The input field in my app is given below

[screenshot]

The UI automator view is given below

[screenshot]

The code I tried:

 driver.findElement(By.xpath("//android.widget.EditText[@text='Mobile Number']")).sendKeys("8129497946");

and

driver.findElement(By.xpath("//android.widget.EditText[contains(@resource-id,'ext-element-65')]")).sendKeys("8129497946");

any solutions?

Testing jCache with Hazelcast Provider

How can I unit test caching using jCache and the Hazelcast provider?

Currently I'm getting this error:

Caused by: java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried: [localhost/127.0.0.1:5701, localhost/127.0.0.1:5702, localhost/127.0.0.1:5703]

Simplified code below:

@RunWith(SpringJUnit4ClassRunner.class)
@TestPropertySource("classpath:provider.properties")
@ContextConfiguration(classes = {
    CustomCacheConfiguration.class
})

I have the cache declared in the hazelcast.xml file along with multicast, so if I understand correctly, I should get an embedded instance.
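A hedged sketch of one way to keep the test embedded (this assumes Hazelcast 3.x, where the hazelcast.jcache.provider.type property chooses between the client and server JCache providers): the stack trace looks like the client provider trying to reach a member on localhost:5701, so forcing the server provider makes the test start its own member.

import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class CacheTestSetup {
    // a minimal sketch: select the embedded ("server") provider so the test
    // spins up its own member instead of connecting as a client on :5701
    public static CacheManager embeddedCacheManager() {
        System.setProperty("hazelcast.jcache.provider.type", "server");
        CachingProvider provider = Caching.getCachingProvider();
        return provider.getCacheManager();
    }
}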

mardi 27 juin 2017

Not able to create spy on method

My service is

var testappserviceone = angular.module('testappserviceonemodule',[])

testappserviceone.factory('testappServiceone',[function(){
    return function (x,y) {

        function addobj (x,y){
            return x+y;
        }

        return{
            add:addobj
        }
    }
}]);

My test suite is

describe('testappserviceone add method functionality', function(){

    beforeEach(function () {
        spyOn(testappServiceone(10,15),'add').and.callThrough();
    });

    it('testappServiceone add method functionality', function() {
        testappServiceone(10,15).add();
        expect(testappServiceone(10,15).add).toHaveBeenCalled();
    });

});

The Exception I am getting is

Expected a spy, but got Function.

I think spyOn() is not creating a spy. Please help me understand and solve this problem.
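A hedged observation that may explain it: every call to testappServiceone(10, 15) builds a brand-new object, so the instance spied on in beforeEach is not the instance exercised in the test. Keeping a single instance lines the spy up with the call:

describe('testappserviceone add method functionality', function () {
    var service;

    beforeEach(function () {
        service = testappServiceone(10, 15);      // one shared instance
        spyOn(service, 'add').and.callThrough();  // spy on that instance
    });

    it('calls add', function () {
        service.add(10, 15);
        expect(service.add).toHaveBeenCalled();
    });
});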

What is the difference between test case, test plan, test suite and test scenario?

What is the difference between a test plan, test suite, test case and test scenario? Is there any format that Quality Assurance teams follow, or a generalised format to follow?

What can QA do that is productive when the AUT is down/unavailable?

I could think of the following:

- look at Splunk logs to find the cause
- learn about any deployment issues
- test negative scenarios to validate that they work as expected
- write blueprint code/scripts for automation, etc.

Is there anything else that can be done?

Testing URL (regex) validation with RSpec

A Rails 4.2 application has several models with attributes containing URLs. URL validation is done at the model with validates :website_url, format: { with: /\A(https?|ftp):\/\/(-\.)?([^\s\/?\.#-]+\.?)+(\/[^\s]*)?\z/i }.

I need to test the URL validation with RSpec 3.5. It is important to ensure that some well-known XSS patterns do not pass validation and that the most commonly used URL patterns pass validation.

Ideally, I'd like to avoid adding one test for each valid and invalid URL I'm testing, so the rspec -fd output is not polluted. However, that would probably require creating two tests (one for valid URLs and one for invalid URLs) and adding multiple expectations to each test (one expectation per URL), which does not seem like a good idea.

The best solution I've come up with so far are the following shared examples. Can you think of a better way to test URL validation thoroughly?

RSpec.shared_examples "url validation" do |attribute|
  INVALID_URLS = [
      "invalidurl",
      "inval.lid/urlexample",
      "javascript:dangerousJs()//http://www.validurl.com",
      # Literal array is required for \n to be parsed
      "http://www.validurl.com\n<script>dangerousJs();</script>"
  ]

  VALID_URLS = [
      "http://validurl.com",
      "http://ift.tt/2ufhCLR"
  ]

  INVALID_URLS.each do |url|
    it "is invalid with #{url} in #{attribute}" do
      object = FactoryGirl.build(factory_name(subject), attribute => url)
      object.valid?
      expect(object.errors[attribute]).to include("is invalid")
    end
  end

  VALID_URLS.each do |url|
    it "is valid with #{url} in #{attribute}" do
      object = FactoryGirl.build(factory_name(subject), attribute => url)
      object.valid?
      expect(object).to be_valid
    end
  end
end

Within the model specs:

include_examples "url validation", :website_url

Antlr4 on Windows7 - Parse Tree Inspector doesn't pop up when TestRig is called with -gui switch

I am running it from cmd with administrative privileges, and CLASSPATH is set.

"-tokens" and "-tree" work fine but "-gui" fails.

Please help.

How to print the content of a variable in python testing within a virtual environment on command line

I have a question pertaining to testing within a virtual environment on the command line. I am checking whether the following Python code:

import requests
headers = {"X-Access-Token": "your_api_token"}
url = "https://api.apps.com"
result = requests.get(url, headers=headers)

will give the following response:

{
  "api_version": "0.0.9",
  "store_name": "Test store",
  "store_url": "test.myshopify.com"
}

I'm testing this by typing the following into my virtual environment on the command line (also, just to note, I'm on a preprod server):

    (env) SAS@preprod:~$ python
    import requests
    headers = {"X-Access-Token": "abc123"}
    url = "https://api.apps.com"
    result = requests.get(url, headers=headers)
    print result

When I type print result, however, I get:

    <Response [200]>

What am I doing wrong?
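Nothing is actually failing here: requests.get() returns a Response object, and printing it shows only its status line. The body has to be read off the object explicitly (these attribute names come from the requests library):

print result.status_code   # 200
print result.json()        # parsed JSON body, e.g. {"api_version": "0.0.9", ...}
print result.text          # raw body as a string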

Is it possible to build instrumentation tests for both debug and release in Android Studio using Gradle

Currently I can only build either debug or release tests in Android Studio (2.3.2) and Gradle (3.3), using testBuildType "debug" (the default, not required in build.gradle) or testBuildType "release". Is it possible to build tests for both debug and release builds? If so, how?

Running asynchronous tests on Phantom and node

I'm using phantom module on node and want to test UI on different resolutions. Basically I've achieved what I wanted with this piece of code:

const phantom = require('phantom');

const   lgResolutionSuite = require('./suites/lg'),
        mdResolutionSuite = require('./suites/md'),
        xlResolutionSuite = require('./suites/xl');


let testSuites = [lgResolutionSuite, mdResolutionSuite, xlResolutionSuite];

(async function() {

    let instance = await phantom.create(),
        page = await instance.createPage();

        page.property('onConsoleMessage', function(msg, lineNum, sourceId) {
            console.log(msg);
        });

    for (let testSuite of testSuites) {
        if (!testSuite.viewportSize) {
            throw new Error('Set up viewportsize for suite.')
        }
        if (!testSuite.suite) {
            throw new Error('Set up suite to run.')
        }

        await page.open('./tests/css-test.html')
            .then(status => {
                page.property('viewportSize', testSuite.viewportSize);
                return page.evaluateJavaScript("" + testSuite.suite);
            })
            .then(() => {
                return page.evaluate(function() {
                    mocha.run();
                });
            });
    }


    await instance.exit();
}());

But as I'm learning JS I would like to rewrite it to something more elegant. Right now I'm trying this code:

const phantom = require('phantom');

const   lgResolutionSuite = require('./suites/lg'),
        mdResolutionSuite = require('./suites/md'),
        xlResolutionSuite = require('./suites/xl');


let testSuites = [lgResolutionSuite, mdResolutionSuite, xlResolutionSuite];

(async function() {
    let instance,
        phantomPage;

    let tests = await phantom.create()
            .then(inst => {
                instance = inst;
                return instance.createPage();
            })
            .then(page => {
                phantomPage = page;
                phantomPage.property('onConsoleMessage', function(msg, lineNum, sourceId) {
                    console.log(msg);
                });
                return phantomPage;
            })
            .then(phantomPage => {
                return phantomPage.open('./tests/css-test.html');
            })
            .then((status) => {
                return testSuites.map((testSuite) => {
                    return phantomPage.property('viewportSize', testSuite.viewportSize)
                        .then(() => {
                            phantomPage.evaluateJavaScript("" + testSuite.suite)
                        })
                        .then(() => {
                            return phantomPage.evaluate(function () {
                                mocha.run();
                            });
                        })
                        .then(() => {
                            return phantomPage.reload();
                        })
                });
            })
            .catch(console.error);


    Promise.all(tests).then((values) =>{
        console.log(values);
    })
        .catch(console.error);

    await instance.exit();
}());

but running it ( node file.js ) throws error:

Error: Error reading from stdin: Error: read ECONNRESET
    at Phantom._rejectAllCommands (node_modules/phantom/lib/phantom.js:361:41)
    at Phantom.kill (node_modules/phantom/lib/phantom.js:341:14)
    at Socket.Phantom.process.stdin.on.e (node_modules/phantom/lib/phantom.js:175:18)
    at emitOne (events.js:115:13)
    at Socket.emit (events.js:210:7)
    at emitErrorNT (internal/streams/destroy.js:62:8)
    at _combinedTickCallback (internal/process/next_tick.js:102:11)
    at process._tickCallback (internal/process/next_tick.js:161:9)

even though the tests array contains promises. Also, I would like to reload the page for each suite to remove previously injected test code; that's why the reload function is used. What am I missing? :)
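A hedged reading of the failure: tests holds pending promises, but await instance.exit() runs straight away, killing PhantomJS while the suites are still talking to it, hence the ECONNRESET. Awaiting the whole batch before tearing down should avoid that:

// a minimal sketch: settle every suite before exiting phantom
const values = await Promise.all(tests);
console.log(values);
await instance.exit();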

Finding the best features in an SVM classification process using sequentialfs

I created a feature vector of more than 100 elements, then used a multiclass SVM to classify (train, test) my samples. I want to know which features were the most valuable and contributed most to successful identification, so I tried using sequentialfs and didn't understand the example given in the MATLAB documentation.

So I'll provide a simpler one below and would like to know how to add sequentialfs to it:

%% SVM Multiclass Example 
% SVM is inherently one vs one classification. 
% This is an example of how to implement multiclassification using the 
% one vs all approach. 

TrainingSet=[ 1 10;2 20;3 30;4 40;5 50;6 66;3 30;4.1 42]; 
TestSet=[3 34; 1 14; 2.2 25; 6.2 63]; 
GroupTrain=[1 1 2 2 3 3 2 2]; 
results = multisvm(TrainingSet, GroupTrain, TestSet); 
disp('multi class problem'); 
disp(results);

multisvm code:

u = unique(GroupTrain);   % distinct class labels
numClasses = length(u);
result = zeros(length(TestSet(:,1)),1);
%build models
for k=1:numClasses
    %Vectorized statement that binarizes Group
    %where 1 is the current class and 0 is all other classes
    G1vAll=(GroupTrain==u(k));
    models(k) = svmtrain(TrainingSet,G1vAll);
end
%classify test cases
for j=1:size(TestSet,1)
    for k=1:numClasses
        if(svmclassify(models(k),TestSet(j,:))) 
            break;
        end
    end
    result(j) = k;
end
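A minimal sketch of wiring sequentialfs into this example, using the multisvm function above as the classifier inside the criterion (sequentialfs repeatedly calls the criterion with candidate feature subsets and keeps the subset that minimises the returned error count):

% a minimal sketch: the criterion trains on XTRAIN/ytrain via multisvm and
% counts misclassifications on XTEST/ytest
crit = @(XTRAIN, ytrain, XTEST, ytest) ...
    sum(ytest ~= multisvm(XTRAIN, ytrain, XTEST));
% with only 8 samples the default 10-fold CV will not work; shrink it
cvp = cvpartition(length(GroupTrain), 'kfold', 2);
[inmodel, history] = sequentialfs(crit, TrainingSet, GroupTrain', 'cv', cvp);
% inmodel is a logical mask over the columns: true = selected feature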

Is it a good practice to have member functions to assert for tests in c#

While writing tests I found that assert statements to compare objects are generally placed within the test project and are not part of the actual class being checked for equality.

I was wondering if it is good practice to instead have an assert method right within the class itself, like the following:

public class Person
{
     public string FirstName {get;set;}
     public string LastName {get;set;}

     public void AssertEquals(Person expected)
     {
          Assert.AreEqual(expected.FirstName, FirstName);
          Assert.AreEqual(expected.LastName, LastName);
     }
}
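An alternative that keeps assertions out of production code, sketched under the assumption that value equality is really what the tests want: implement Equals/GetHashCode on the class and let the test project call Assert.AreEqual on whole objects.

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Person;
        return other != null
            && FirstName == other.FirstName
            && LastName == other.LastName;
    }

    public override int GetHashCode()
    {
        return (FirstName ?? string.Empty).GetHashCode()
             ^ (LastName ?? string.Empty).GetHashCode();
    }
}

// in the test project:
// Assert.AreEqual(expectedPerson, actualPerson);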

Testing select input change in Angular2/Jasmine

I'm having trouble figuring out how to simulate a change in the value of a dropdown list in my jasmine unit test.

When the dropdown list value is changed, it emits an event.

My current test looks like:

spyOn(component.varChange, 'emit');

let selectEl: DebugElement = fixture.debugElement.query(By.css("#selectBox"));
selectEl.nativeElement.selectedIndex = 0;
selectEl.nativeElement.dispatchEvent(new Event("input"));
fixture.detectChanges();
expect(component.varChange.emit).toHaveBeenCalledWith(testData[0]);

The error I receive is:Expected spy emit to have been called with [ Object({ ... }) ] but it was never called.
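One hedged guess: Angular's binding for <select> elements listens for the change event rather than input, so dispatching change may be what actually triggers the emit:

// a minimal sketch: <select> updates on 'change', not 'input'
selectEl.nativeElement.selectedIndex = 0;
selectEl.nativeElement.dispatchEvent(new Event('change'));
fixture.detectChanges();
expect(component.varChange.emit).toHaveBeenCalledWith(testData[0]);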

NUnit Test override Extension C#

I have the following methods that I cannot change

static int A ( string var )
static int A ( this string var )

Since they are defined as static, in order to test them I thought to create public methods that call them, like

public static int ATest ( string var )
{
   return A(var);
}
public static int ATestThis ( this string var )
{
   return A(var);
}

And then test them this way:

namespace test.NUnit
{
    [TestFixture]
    public class myFirstTest
    {
        [Test]
        public void TestOnA() {
            Assert.... // with ATest and ATestThis
        }
    }
}

However, I obtained these two errors:

Type 'Program' already defines a member called A with the same parameter types
The call is ambiguous between the following methods or properties

Do you have any idea?

Bigcommerce Stencil External Access Url Fails to Load Page for Testing

Whenever I run stencil start in my terminal, about 90% of the time the external URL it gives me alongside my local address (something like http://ift.tt/2sXoUEH, versus http://localhost:3000) fails to load the page on my testing devices and in all browsers. I've looked and looked, but I can't seem to find what is causing the issue. Has anybody run into this problem before?

Verify / Ensure test code is only called in tests, not in production code

In one of our projects at work, we have started using Guava's VisibleForTesting to annotate "helper" methods that must only be used for testing (e.g. a Setter to allow for service mocks to be "injected"). In production, the application should use Spring's Autowired to "get" its services.

Is there a way to check if code annotated with VisibleForTesting is only called in test code (static analysis, not if someone tries funny stuff via reflection)?

Angular router, breadcrumbs component testing

I have the breadcrumbs component with its test, see below. Coverage must be over 75%. The test fails because ActivatedRoute.root.children is undefined. Does anyone know how I can define it? I need at least one child for root.

Breadcrumbs.component:

import { Component, OnInit } from '@angular/core';
import {
 Router,
 ActivatedRoute,
 NavigationEnd,
 Params,
 PRIMARY_OUTLET
} from '@angular/router';

import { Breadcrumb } from './breadcrumbs.interface.component';
import * as _ from 'lodash';
import 'rxjs/add/operator/filter';

@Component({
 selector: 'breadcrumbs',
 styleUrls: ['./breadcrumbs.component.scss'],
 templateUrl: './breadcrumbs.component.html'
})

export class BreadcrumbsComponent implements OnInit {
private breadcrumbs: Breadcrumb[];

constructor(
    private router: Router,
    private activatedRoute: ActivatedRoute,
    ) {
}
public ngOnInit() {
    this.breadcrumbs = [];
    this.router.events.filter((event) => event instanceof NavigationEnd)
    .subscribe((event: NavigationEnd) => {
        let id = event.url;
        let root: ActivatedRoute = this.activatedRoute.root;
        this.breadcrumbs = this.getBreadcrumbs(root, '/main', this.breadcrumbs, id);
    });
}

public goTo = (breadcrumb) => this.router.navigateByUrl(breadcrumb.path);

private getBreadcrumbs(
    route: ActivatedRoute,
    url: string = '',
    breadcrumbs: Breadcrumb[] = [],
    id: string
    ): Breadcrumb[] {
    const ROUTE_DATA_BREADCRUMB: string = 'title';
    // get the child routes
    let children: ActivatedRoute[] = route.children;

    // return if there are no more children
    if (children.length === 0) {
        return breadcrumbs;
    }

    // iterate over each children
    for (let child of children) {
        // verify primary route
        if (child.outlet !== PRIMARY_OUTLET) {
            continue;
        }

        // verify the custom data property "breadcrumb" is specified on the route
        if (!child.snapshot.data.hasOwnProperty(ROUTE_DATA_BREADCRUMB)) {
            return this.getBreadcrumbs(child, url, breadcrumbs, id);
        }
        // get the route's URL segment
        let routeURL: string = child.snapshot.url.map((segment) => segment.path).join('/');

        // append route URL to URL
        url += `/${routeURL}`;
        let title = child.snapshot.data[ROUTE_DATA_BREADCRUMB]
        || child.snapshot.params[ROUTE_DATA_BREADCRUMB];
        let isNew = child.snapshot.data['new'];
        // add breadcrumb
        let breadcrumb: Breadcrumb = {
            title,
            params: child.snapshot.params,
            url,
            id
        };
        if (isNew) {
            breadcrumbs = [breadcrumb];
        } else {
            let index = _.findLastIndex(breadcrumbs, { id });
            if (index > -1) {
                breadcrumbs.splice(index + 1);
            } else {
                breadcrumbs.push(breadcrumb);
            }
        }

        // recursive
        return this.getBreadcrumbs(child, url, breadcrumbs, id);
    }

    // we should never get here, but just in case
    return breadcrumbs;
}
}

Breadcrumbs.component.spec:

import {
 TestBed,
 ComponentFixture,
 async,
 inject
} from '@angular/core/testing';
import { Component } from '@angular/core';
import { Router, ActivatedRoute, NavigationEnd } from '@angular/router';
import { RouterTestingModule } from '@angular/router/testing';
import { NoopAnimationsModule } from '@angular/platform-browser/animations';
import { CommonModule } from '@angular/common';

import { BreadcrumbsComponent } from './breadcrumbs.component';
import { RouterLinkStubDirective, ActivatedRouteMock, MockComponent } from '../../testing';
import { Observable } from 'rxjs/Observable';
import { of } from 'rxjs/observable/of';

class RouterStub {
public navigate = jasmine.createSpy('navigate');
public ne = new NavigationEnd(0,
    '/main/sps',
    '/main/sps');
public events = new Observable((observer) => {
    observer.next(this.ne);
    observer.complete();
});
public navigateByUrl(url: string) { location.hash = url; }
}

let ROUTES = [
{
    path: 'main',
    component: MockComponent,
    children: [
        {
            path: 'link1',
            component: MockComponent
        },
        {
            path: 'link2',
            component: MockComponent
        },
    ]
},
{
    path: '',
    component: MockComponent,
},
];

describe('Component: BreadcrumbsComponent', () => {
let comp: BreadcrumbsComponent;
let fixture: ComponentFixture<BreadcrumbsComponent>;
let routerStub = new RouterStub();
let mockActivatedRoute = new ActivatedRouteMock();
let router: Router;
beforeEach(async(() => {
    TestBed.configureTestingModule({
        declarations: [
            BreadcrumbsComponent,
            RouterLinkStubDirective,
            MockComponent
        ],
        providers: [
            { provide: Router, useValue: routerStub },
            { provide: ActivatedRoute, useValue: mockActivatedRoute },
        ],
        imports: [
            NoopAnimationsModule,
            CommonModule,
            RouterTestingModule.withRoutes(ROUTES)
        ]
    })
        .compileComponents()
        .then(() => {
            fixture = TestBed.createComponent(BreadcrumbsComponent);
            comp = fixture.componentInstance;
            fixture.detectChanges();
        });
}));
it('should be init', () => {
    expect(comp).toBeTruthy();
});
});

part of the log:

Component: BreadcrumbsComponent ✖ should be init Chrome 58.0.3029 (Mac OS X 10.12.3) Component: BreadcrumbsComponent should be init FAILED TypeError: Cannot read property 'children' of undefined at BreadcrumbsComponent.getBreadcrumbs (webpack:///src/app/components/breadcrumbs/breadcrumbs.component.ts:33:29 <- config/spec-bundle.js:122693:43) at SafeSubscriber._next (webpack:///src/app/components/breadcrumbs/breadcrumbs.component.ts:25:38 <- config/spec-bundle.js:122692:1206)

[screenshot]
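A hedged sketch of the missing piece: the component only ever reads root and then recurses over children, so the project's ActivatedRouteMock (its real shape isn't shown here) needs at least a root carrying a children array:

// a minimal sketch: root points back at the stub itself, and children
// starts empty so getBreadcrumbs returns instead of throwing
export class ActivatedRouteMock {
    public children: any[] = [];
    get root() {
        return this;
    }
}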

Xcode doesn't start Quick tests

I'm developing a pod and writing tests for it with the Quick pod.

When I use an instance from my pod, Xcode doesn't see any tests.

Here is screenshot:

[screenshot]

But it's appearing without instance:

[screenshot]

Testing class that extends abstract class

I need to write a test for a class that extends an abstract class. The problem starts when I need to test a method that contains a super call. The code looks something like this:

public abstract class AbstractClass {
    void someMethod() {
        // concrete behaviour (it needs a body for super.someMethod() to be callable)
    }
}

public class MyClass extends AbstractClass {
    void methodToTest() {
        // some code
        super.someMethod();
    }
}

Any idea how I should get around it?

extract p-value from oneway test [duplicate]

I'm trying to extract the p-value from a oneway test, which is an S4-type object. I tried every snippet I found, but in this test the p-value slot holds a function.

This is what I get when I call str() on my test:

    > str(One1)
Formal class 'ScalarIndependenceTest' [package "coin"] with 7 slots
  ..@ parameter   : chr "mu"
  ..@ nullvalue   : num 0
  ..@ distribution:Formal class 'ApproxNullDistribution' [package "coin"] with 10 slots
  .. .. ..@ seed          : int [1:626] 403 38 1805567395 1288999595 1796387833 -1210691225 -851843786 571713292 970608248 -1588939275 ...
  .. .. ..@ q             :function (p)  
  .. .. ..@ d             :function (x)  
  .. .. ..@ support       :function (raw = FALSE)  
  .. .. ..@ parameters    : list()
  .. .. ..@ pvalue        :function (q)  
  .. .. ..@ midpvalue     :function (q)  
  .. .. ..@ pvalueinterval:function (q)  
  .. .. ..@ p             :function (q)  
  .. .. ..@ name          : chr "Monte Carlo Distribution"
  ..@ statistic   :Formal class 'ScalarIndependenceTestStatistic' [package "coin"] with 15 slots
  .. .. ..@ alternative                : chr "two.sided"
  .. .. ..@ paired                     : logi FALSE
  .. .. ..@ teststatistic              : Named num -6.66
  .. .. .. ..- attr(*, "names")= chr "bas"
  .. .. ..@ standardizedlinearstatistic: Named num -6.66
  .. .. .. ..- attr(*, "names")= chr "bas"
  .. .. ..@ linearstatistic            : num 56.8
  .. .. ..@ expectation                : Named num 136
  .. .. .. ..- attr(*, "names")= chr "bas"
  .. .. ..@ covariance                 :Formal class 'Variance' [package "coin"] with 1 slot
  .. .. .. .. ..@ variance: Named num 141
  .. .. .. .. .. ..- attr(*, "names")= chr "bas"
  .. .. ..@ xtrans                     : num [1:972, 1] 1 1 1 0 0 0 1 1 1 0 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:972] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : chr "bas"
  .. .. .. ..- attr(*, "assign")= int 1
  .. .. ..@ ytrans                     : num [1:972, 1] 0 0 0 0.638 0 ...
  .. .. .. ..- attr(*, "assign")= int 1
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : NULL
  .. .. .. .. ..$ : chr ""
  .. .. ..@ xtrafo                     :function (data, numeric_trafo = id_trafo, factor_trafo = f_trafo, ordered_trafo = of_trafo, surv_trafo = logrank_trafo, 
    var_trafo = NULL, block = NULL)  
  .. .. ..@ ytrafo                     :function (data, numeric_trafo = id_trafo, factor_trafo = f_trafo, ordered_trafo = of_trafo, surv_trafo = logrank_trafo, 
    var_trafo = NULL, block = NULL)  
  .. .. ..@ x                          :'data.frame':   972 obs. of  1 variable:
  .. .. .. ..$ factor(vert): Factor w/ 2 levels "bas","haut": 1 1 1 2 2 2 1 1 1 2 ...
  .. .. .. ..- attr(*, "terms")=Classes 'terms', 'formula'  language ~factor(vert)
  .. .. .. .. .. ..- attr(*, "variables")= language list(factor(vert))
  .. .. .. .. .. ..- attr(*, "factors")= int [1, 1] 1
  .. .. .. .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. .. .. .. ..$ : chr "factor(vert)"
  .. .. .. .. .. .. .. ..$ : chr "factor(vert)"
  .. .. .. .. .. ..- attr(*, "term.labels")= chr "factor(vert)"
  .. .. .. .. .. ..- attr(*, "order")= int 1
  .. .. .. .. .. ..- attr(*, "intercept")= int 1
  .. .. .. .. .. ..- attr(*, "response")= int 0
  .. .. .. .. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv> 
  .. .. .. .. .. ..- attr(*, "predvars")= language list(factor(vert))
  .. .. .. .. .. ..- attr(*, "dataClasses")= Named chr "factor"
  .. .. .. .. .. .. ..- attr(*, "names")= chr "factor(vert)"
  .. .. ..@ y                          :'data.frame':   972 obs. of  1 variable:
  .. .. .. ..$ duree: num [1:972] 0 0 0 0.638 0 ...
  .. .. .. ..- attr(*, "terms")=Classes 'terms', 'formula'  language ~duree
  .. .. .. .. .. ..- attr(*, "variables")= language list(duree)
  .. .. .. .. .. ..- attr(*, "factors")= int [1, 1] 1
  .. .. .. .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. .. .. .. ..$ : chr "duree"
  .. .. .. .. .. .. .. ..$ : chr "duree"
  .. .. .. .. .. ..- attr(*, "term.labels")= chr "duree"
  .. .. .. .. .. ..- attr(*, "order")= int 1
  .. .. .. .. .. ..- attr(*, "intercept")= int 1
  .. .. .. .. .. ..- attr(*, "response")= int 0
  .. .. .. .. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv> 
  .. .. .. .. .. ..- attr(*, "predvars")= language list(duree)
  .. .. .. .. .. ..- attr(*, "dataClasses")= Named chr "numeric"
  .. .. .. .. .. .. ..- attr(*, "names")= chr "duree"
  .. .. ..@ block                      : Factor w/ 1 level "0": 1 1 1 1 1 1 1 1 1 1 ...
  .. .. ..@ weights                    : num [1:972] 1 1 1 1 1 1 1 1 1 1 ...
  ..@ estimates   : list()
  ..@ method      : chr "Two-Sample Fisher-Pitman Permutation Test"
  ..@ call        : language independence_test.IndependenceProblem(object = <S4 object of class structure("IndependenceProblem", package = "coin")>,      teststat = "scalar", distribution = function (object)  ...

And when I call One1@distribution@pvalue I just get a function back...

Thanks for your help!
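For what it's worth, in the coin package the pvalue slot holds an accessor function, and the package also exports a pvalue() generic; either route gets the number out (a sketch against the object shown above):

# a minimal sketch: use coin's extractor on the whole test object...
library(coin)
pvalue(One1)
# ...or call the slot (it is a function) at the observed statistic
One1@distribution@pvalue(One1@statistic@teststatistic)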

Java TEST database connection

This is a test question post - please do not answer

public int addCourse (Course course) throws SQLException
{
    query = "INSERT INTO course (title) VALUES (?);";
    ps = DBase.getInstance().getConnection().prepareStatement(query, 1);
    ps.setString(1, course.title);
    ps.execute();
    rs = ps.getGeneratedKeys();
    if (!rs.next())
        return -1;
    return course.course_id = rs.getInt(1);
}

Bash to test regex-match, seems to always return OK even not match

I'm on bash and tried to match a short string against a pattern:

$[[ "ab" =~ "^a*" ]]
$echo $?
0

OK, as I expected. But then I changed it to "^b*", expecting that this time it wouldn't match:

$[[ "ab" =~ "^b*" ]]
$echo $?
0

Strangely, the return value is still "OK". Why is that? Where did I go wrong? Thanks.
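The likely explanation, for the record: * in a regex means "zero or more", so ^b* happily matches the empty string at the start of "ab". Requiring at least one b shows the difference:

# a minimal check: ^b* matches zero 'b's, ^b+ requires at least one
[[ "ab" =~ ^b* ]]; echo $?   # 0 -- zero 'b's at the start still match
[[ "ab" =~ ^b+ ]]; echo $?   # 1 -- one-or-more 'b' genuinely fails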

lundi 26 juin 2017

JKS keys with dredd

Is there a way to tell dredd to use specific JKS keys during its tests?

I want to run some automatic tests with dredd against a local HTTPS server, and I have the two JKS keys recognized by the server...

It gives an "unable to find element" error when the script is executed with TestNG

It gives an "unable to find element" error when the script below is executed with TestNG. The same code runs successfully without TestNG.

@Test(priority=0)
  public void verifyHomepageTitle() {

      System.out.println("Launching firefox browser"); 
      System.setProperty("webdriver.gecko.driver", driverPath);
      driver = new FirefoxDriver();
      driver.get(baseUrl);
      String expectedTitle = "Login";
      String actualTitle = driver.getTitle();
      Assert.assertEquals(actualTitle, expectedTitle);
     // driver.close();
  }



 @Test(priority=1)
  public void login(){

        driver.findElement(By.id("LoginForm_email")).sendKeys("bits.qa1@gmail.com");
        driver.findElement(By.id("LoginForm_password")).sendKeys("benchmark");
        driver.findElement(By.id("LoginForm_rememberMe")).click();
        driver.findElement(By.xpath("//button[contains(.,'Sign in')]")).click();
  }

@Test(priority=2)
  public void updateEntity() throws InterruptedException
  {

        String innerText= driver.findElement(By.xpath("//table/tbody/tr/td")).getText();
        System.out.println(innerText);
        //driver.manage().timeouts().implicitlyWait(10,TimeUnit.SECONDS);
        Thread.sleep(10000);
        driver.findElement(By.xpath("//table/tbody/tr/td")).click();
  }
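A hedged suggestion, since TestNG's priorities make the timing between tests visible: wait for the element explicitly instead of assuming the page is ready when the next @Test starts (this uses the standard Selenium support classes):

import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// a minimal sketch: block until the field is present before typing
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("LoginForm_email")))
    .sendKeys("bits.qa1@gmail.com");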

calculate execution times of async function over multiple calls

If the doWork function were performing some real operation, say picking an item from a queue and processing it, how would I get the execution times of doWork over multiple calls? I want to find out how much time doWork takes to complete on average.

Sample Code

function doWork () {
  return Promise.resolve({first: 'Tony', last: 'Starks'})
}

async function wrapper () {
  console.time('wrapper')
  const response = await doWork()
  console.timeEnd('wrapper')
  return response
}

Promise.all([
  wrapper(),
  wrapper(),
  wrapper()
]).then((result) => console.info(result))

Output

wrapper: 0.388ms
[ { first: 'Tony', last: 'Starks' },
  { first: 'Tony', last: 'Starks' },
  { first: 'Tony', last: 'Starks' } ]
(node:2749) Warning: No such label 'wrapper' for console.timeEnd()
(node:2749) Warning: No such label 'wrapper' for console.timeEnd()
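The warnings give the cause away: console.time labels are global, so three concurrent wrapper() calls share one 'wrapper' label and the first timeEnd consumes it. Timing manually keeps each call independent; a minimal sketch:

// a minimal sketch: time each call with its own clock instead of a
// shared console.time label
async function wrapper (id) {
  const start = Date.now()
  const response = await doWork()
  console.info(`wrapper ${id}: ${Date.now() - start}ms`)
  return response
}

Promise.all([wrapper(1), wrapper(2), wrapper(3)])
  .then((result) => console.info(result))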

How to do load testing for a web application built on JSON RESTful web services at the server side?

This question is not about which tools to use to do load testing...

It is a more general question about what should be tested.

background:

Years ago, most enterprise web applications were designed with the front end and back end mixed together and all business logic done on the server side, while the front end's only responsibility was to render the UI.

When load testing such an application, whether Oracle EBS, an SAP product, or an application built on a framework like Struts, what we do is:

1. Design a user test case (like login, search, put something in the shopping cart, then check out).
2. Write a testing script to automate that test case (underneath, it is pure HTTP calls to the server side).
3. Use a load-testing tool to run the script with many virtual users at once (it behaves as if many users executed the same business flow together).

Now the whole picture has changed, since the front end and back end are totally separated.

A lot of business logic can be put in the front end, and the server side only exposes separate RESTful web services...

What I found is that we can't do load testing like before, simply because a user's test case can no longer be mapped directly onto HTTP communication: some tasks are done entirely in the front end.

But in this new world, how do we do performance testing? Just performance test the JSON RESTful web services? What about the static resources transferred from the server side to the browser?

I need suggestions from those who have done a lot of performance/load testing for this kind of JSON RESTful web service application.

Does appending something in URL create a feedback page?

Testing something; will delete it soon. I want to know whether, if I just hit the URL with something appended at the end, a new endpoint is created for us automatically to provide feedback.

Testing apollo graphql container with separate data server

I'm working on an app that has two repos: one for a GraphQL data layer and another for an Apollo/React webapp. I'd like to test my GraphQL containers in the Apollo/React app using 'mockServer' from the 'graphql-tools' package, which requires a GraphQL schema. I'm thinking of creating another repo just for a JSON representation of the GraphQL schema, pulling that down before tests, then feeding it into 'mockServer'.

Has anyone else had experience with a pattern like this? Most of the apollo testing examples I've seen seem to have the data layer and the ui layer in the same repo.

The main problem I see with this is that the tests will become dependent on the schema repo being updated/available. I think this is probably better than redefining the schema in the apollo/react app just for testing, and think it'd be nice to have tests for containers fail if the graphql schema changes, but I'm curious about other people's opinions/experiences.

Purchase.getsku() returning null?

All I want to do is check whether the user's purchase matches the SKU "donate_one_dollar". However, I get the error:

    java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String util.Purchase.getSku()' on a null object reference

at com.curlybrace.ruchir.rescuer.MainActivity$13.onIabPurchaseFinished(MainActivity.java:1113)
at util.IabHelper.launchPurchaseFlow(IabHelper.java:472)
at util.IabHelper.launchPurchaseFlow(IabHelper.java:398)
at util.IabHelper.launchPurchaseFlow(IabHelper.java:392)

Basically, mPurchaseFinishedListener seems to be receiving a null Purchase object. How can I fix this?


IabHelper.OnIabPurchaseFinishedListener mPurchaseFinishedListener
            = new IabHelper.OnIabPurchaseFinishedListener() {
        public void onIabPurchaseFinished(IabResult result, Purchase purchase) {

            Log.v("myTag", "Purchase finished. SKU = " + purchase.getSku()); // not working

                if (purchase.getSku().equals("donate_one_dollar")) { // doesn't work

                    Toast.makeText(MainActivity.this, "One dollar donated!", Toast.LENGTH_SHORT).show();
                }
        }
    };

I am using beta testing with a test account to test my purchases.
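A hedged sketch of a more defensive listener (isFailure() is part of the same TrivialDrive-style IabHelper util): the callback can legitimately receive a failed result together with a null purchase, so bail out before touching getSku():

IabHelper.OnIabPurchaseFinishedListener mPurchaseFinishedListener
        = new IabHelper.OnIabPurchaseFinishedListener() {
    public void onIabPurchaseFinished(IabResult result, Purchase purchase) {
        // a minimal sketch: a failed flow delivers purchase == null
        if (result.isFailure() || purchase == null) {
            Log.w("myTag", "Purchase failed: " + result);
            return;
        }
        if ("donate_one_dollar".equals(purchase.getSku())) {
            Toast.makeText(MainActivity.this, "One dollar donated!",
                    Toast.LENGTH_SHORT).show();
        }
    }
};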