Sunday, June 30, 2019

How to use .should with the "closeTo" assertion in Cypress?

In this SO post, the following code snippet was posted.

cy.window().then(($window) => {
  expect($window.scrollY).to.be.closeTo(400, 100);
});

However, I would like to use the "should" syntax as shown below.

// This code works
cy.window().its('scrollY').should('equal', 400);

How can I use "should" and "closeTo" together in Cypress (the following does not work)?

// This code doesn't work
cy.window().its('scrollY').should('closeTo', 400, 100);

The documentation doesn't seem to show an example for the above case.
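One workaround that may be worth trying (a sketch, not verified against any particular Cypress version): pass a callback to .should(), which accepts any Chai assertion, including closeTo, while keeping .should()'s retry behaviour.

cy.window().its('scrollY').should((scrollY) => {
  // closeTo(expected, delta) asserts |scrollY - 400| <= 100
  expect(scrollY).to.be.closeTo(400, 100);
});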

How to test a component using react-redux hooks?

I have a simple Todo component that uses react-redux hooks. I'm testing it with Enzyme, but I get either an error or an empty object from a shallow render, as noted below.

What is the correct way to test components using hooks from react-redux?

Todos.js

import React from 'react';
import { useSelector } from 'react-redux';

const Todos = () => {
  const { todos } = useSelector(state => state);

  return (
    <ul>
      {todos.map(todo => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
};

export default Todos;

Todos.test.js v1

...

it('renders without crashing', () => {
  const wrapper = shallow(<Todos />);
  expect(wrapper).toMatchSnapshot();
});

it('should render a ul', () => {
  const wrapper = shallow(<Todos />);
  expect(wrapper.find('ul').length).toBe(1);
});

v1 Error:

...
Invariant Violation: could not find react-redux context value; 
please ensure the component is wrapped in a <Provider>
...

Todos.test.js v2

...
// imported Provider from react-redux 

it('renders without crashing', () => {
  const wrapper = shallow(
    <Provider store={store}>
      <Todos />
    </Provider>,
  );
  expect(wrapper).toMatchSnapshot();
});

it('should render a ul', () => {
  const wrapper = shallow(<Provider store={store}><Todos /></Provider>);
  expect(wrapper.find('ul').length).toBe(1);
});

The v2 tests fail because the snapshot is an empty object:

console.log('Snapshot:', wrapper);
// ShallowWrapper {}
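A possible fix (a sketch, assuming Enzyme's mount is available): shallow() only renders one level deep, so it stops at the Provider and never reaches the ul; a full mount renders the whole tree.

import { mount } from 'enzyme';

it('should render a ul', () => {
  const wrapper = mount(
    <Provider store={store}>
      <Todos />
    </Provider>,
  );
  expect(wrapper.find('ul').length).toBe(1);
});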

Thanks in advance for your help!

MagicMock function inside a constructor

I am writing a test where I instantiate a class.
The constructor has one parameter, to which I pass a Mock.
But besides this parameter, another function is called inside the constructor, and I'm not sure how to make the constructor see that function as a Mock too.
Here's an easy example:

class MyClass:
    def __init__(self, var):
        self._var = var
        self._func = func()  # func is a module-level function
        # Other stuff, I actually care about and can easily check **

Now it's easy to handle the var if I pass it as a parameter in the test:

def test_trying_mock(self):
    var = MagicMock()
    object = MyClass(var)

And the var line is handled. How can I make the constructor see func as a Mock and make it skip to the part I actually want to run and check?
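A possible approach (a sketch; 'mymodule' is a hypothetical placeholder for wherever MyClass looks up func): patch the function at module level with unittest.mock.patch, so the constructor calls a MagicMock instead of the real thing.

from unittest.mock import MagicMock, patch

@patch('mymodule.func')  # hypothetical path: patch where func is *looked up*, not where it is defined
def test_trying_mock(self, mock_func):
    var = MagicMock()
    obj = MyClass(var)
    mock_func.assert_called_once()  # the constructor called the mock, not the real func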

Katalon Studio report missing after upgrade from 5 to 6

How do I save the report into a specific directory from cmd? I have these cmd arguments: -summaryReport -reportFolder="C:\testagent\test_repo". The Basic Report plugin is installed and the HTML report is selected, but after execution I have only these three files in C:\testagent\test_repo: execution.properties, execution.uuid, and execution0.log. In Katalon 5 I had HTML reports after execution. :(

I am a beginner in Android testing with Mockito

I want to test this method using Mockito and unit testing:

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;
import java.util.List;

public class CurrentIP {

    public static String getIPAddress() {
        try {
            List<NetworkInterface> interfaces = Collections.list(NetworkInterface.getNetworkInterfaces());
            for (NetworkInterface intf : interfaces) {
                List<InetAddress> addrs = Collections.list(intf.getInetAddresses());
                for (InetAddress addr : addrs) {
                    if (!addr.isLoopbackAddress()) {
                        String sAddr = addr.getHostAddress();
                        boolean isIPv4 = sAddr.indexOf(':') < 0;
                        if (isIPv4)
                            return sAddr;
                    }
                }
            }
        } catch (Exception ignored) {
        } // for now eat exceptions
        return "";
    }
}
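One possible starting point (a sketch; static mocking requires Mockito 3.4+ with the mockito-inline artifact, which may be newer than what the project uses): stub NetworkInterface.getNetworkInterfaces() and assert on the result.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mockStatic;

import java.net.NetworkInterface;
import java.util.Collections;
import org.junit.Test;
import org.mockito.MockedStatic;

public class CurrentIPTest {

    @Test
    public void returnsEmptyStringWhenNoInterfaces() {
        try (MockedStatic<NetworkInterface> mocked = mockStatic(NetworkInterface.class)) {
            mocked.when(NetworkInterface::getNetworkInterfaces)
                  .thenReturn(Collections.enumeration(Collections.<NetworkInterface>emptyList()));
            assertEquals("", CurrentIP.getIPAddress());
        }
    }
}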

Saturday, June 29, 2019

Working example or tutorial for setting up webpack, coffeescript, mocha?

I have a coffeescript project that started out as a mish-mash of gulp, rollup and babel.

I have converted it to build under webpack using coffee-loader and that works really nicely.

However ... how do I get my coffee/mocha tests to run smoothly under webpack? The mocha-loader docs are very terse. I imagine it is quite straightforward to get *.test.coffee to load, compile coffee -> es6 and run as a test suite.

But it is sort of non-obvious - to me anyway.

Help appreciated.
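For anyone in the same spot, a minimal config might look roughly like this (an untested sketch; the entry path and loader choices are assumptions): mocha-loader wraps the entry so the bundle runs as a mocha suite, and coffee-loader compiles the .coffee files on the way in.

module.exports = {
  mode: 'development',
  entry: 'mocha-loader!./test/index.coffee', // mocha-loader makes the bundle a runnable suite
  module: {
    rules: [
      { test: /\.coffee$/, use: ['coffee-loader'] },
    ],
  },
  resolve: { extensions: ['.coffee', '.js'] },
};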

AdMob test ads not working on released version of Android app

I have an issue with AdMob in my Android apps. I added my device ID as a test device ID, and test ads work well in the debug version of my apps. But when I create a signed release APK, upload it to Google Play, and install my app from Google Play, I notice that the test ads have been replaced with live ads. There is no test-ad label at the top of the ad. Does anybody know the reason for this issue and how to fix it? Thanks & Best Regards
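For comparison, the usual way to register a test device (a sketch; the hashed ID is a placeholder that must come from your own logcat output) looks like this:

AdRequest request = new AdRequest.Builder()
        .addTestDevice(AdRequest.DEVICE_ID_EMULATOR)
        .addTestDevice("YOUR_HASHED_DEVICE_ID")  // placeholder: printed in logcat on the first request
        .build();

If this call only runs in a debug build flavor, the release build would naturally serve live ads, which may be what is happening here.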

Test Jasmine returns undefined for response

I have written a Node.js application where I pass in my MongoDB URL via process.env, so I am able to parse the MongoDB URL in my JS. The reason for this is that I do not wish to hardcode my MongoDB IP; it should be possible to swap it out at any point for another MongoDB database hosted in a separate infrastructure environment, i.e. MONGODB_URL='mongodb://localhost:27017/user' npm start

I am writing a test case for this using jasmine-node. However, when I try MONGODB_URL='mongodb://localhost:27017/user' npm test, it doesn't seem that any test cases run, and it returns undefined for my response.

This is my github repo for my code https://github.com/leeadh/node-jenkins-app-example in the development branch.

Does anyone know what the cause is, or how I should have written it?

Thank you

var request = require("request");
var base_url = "http://localhost:4000/";
var server = require("../app.js");


// sample unit test begin
describe("Test Base Page", function() {
  describe("GET /", function() {
    it("returns status code 200", function(done) {
      request.get("http://localhost:4000/", function(error, response, body) {
        console.log("example")
        console.log(response)
        expect(response.statusCode).toBe(200);
        done();
        server.close()
      });
    });

  });
});
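A hedged guess at the cause: if app.js hasn't finished binding to port 4000 before the request fires, response will be undefined. Starting the server explicitly before the suite (assuming app.js exports the Express app, and assuming a Jasmine version that supports beforeAll/afterAll) would rule that out:

var app = require("../app.js");
var server;

beforeAll(function(done) {
  server = app.listen(4000, done); // run the suite only once the port is open
});

afterAll(function() {
  server.close();
});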


Identify a string that tests all 64 glyphs in Base64 encoding

I am writing a JUnit test to see if the java.util.Base64 function works the same as the sun.misc.BASE64 version. I am almost 100% certain they are identical enough for my purposes, but I'm working in a large enough codebase in a large enough company that if I make even a small change in our code I should write a test for it. So I've made a test that encodes a given string using sun.misc.BASE64 method, and decodes using java.util.Base64, and then compares the original and final strings.

Unsurprisingly, for every string I've attempted, the original string and the final string have been identical. It would seem that java.util.Base64 works the same as sun.misc.BASE64.

However for the sake of writing a better test I'd like to use a string (or anything) that when encoded, is encoded in all 64 glyphs. I don't know enough about base64 encoding to write such a string. Does anybody know of something that I could encode in sun.misc.BASE64 and decode in java.util.Base64 that will ensure that the two methods are treating all 64 glyphs the same?
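One way to get full coverage (a plain-Java sketch): work with bytes rather than a text string, since some required byte patterns aren't valid text, and pack the 6-bit values 0..63 so the encoding emits every glyph exactly once.

import java.util.Base64;

public class AllGlyphs {
    public static void main(String[] args) {
        // Each group of three bytes encodes four 6-bit values a, b, c, d.
        byte[] data = new byte[48];
        int i = 0;
        for (int v = 0; v < 64; v += 4) {
            int a = v, b = v + 1, c = v + 2, d = v + 3;
            data[i++] = (byte) ((a << 2) | (b >> 4));
            data[i++] = (byte) (((b & 0xF) << 4) | (c >> 2));
            data[i++] = (byte) (((c & 0x3) << 6) | d);
        }
        // Encodes to "ABC...XYZabc...xyz0123456789+/": the full alphabet in table order.
        System.out.println(Base64.getEncoder().encodeToString(data));
    }
}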

Friday, June 28, 2019

How to run a script as a pytest test

Suppose I have a test expressed as a simple script with assert statements (see the Background section below for why), e.g.

import foo
assert foo(3) == 4

How would I include this script in my pytest test suite -- in a nice way?

I have tried two working but less-than-nice approaches:

One approach is to name the script like a test, but this makes the whole pytest discovery fail when the test fails.

My current approach is to import the script from within a test function:

from importlib import import_module
from pathlib import Path

def test_notebooks():
    notebook_folder = Path(__file__).parent / 'notebooks'
    for notebook in notebook_folder.glob('*.py'):
        import_module(f'{notebook_folder.name}.{notebook.stem}')

This actually seems to work, but test failures have a long and winding stack trace:

__________________________________________________ test_notebooks ___________________________________________________

    def test_notebooks():
        notebook_folder = Path(__file__).parent / 'notebooks'
        for notebook in notebook_folder.glob('*.py'):
>           import_module(f'{notebook_folder.name}.{notebook.stem}')

test_notebooks.py:7:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
envs\anaconda\lib\importlib\__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1006: in _gcd_import
... (9 lines removed)...
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   assert False
E   AssertionError

notebooks\notebook_2.py:1: AssertionError

Background

The reason I have tests in script files is that they are really Jupyter notebooks saved as .py files with markup by the excellent jupytext plugin.

These notebooks are converted to html for documentation, can be used interactively for learning the system, and serve as cheap functional tests.
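One refinement worth considering (a sketch building on the code above; paths are unchanged assumptions): parametrizing over the scripts makes each notebook report as its own test, with a shorter failure trace per item.

import pytest

notebooks = sorted((Path(__file__).parent / 'notebooks').glob('*.py'))

@pytest.mark.parametrize('notebook', notebooks, ids=lambda p: p.stem)
def test_notebook(notebook):
    import_module(f'{notebook.parent.name}.{notebook.stem}')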

How can I test an onClick line of a React component using Jest?

I am learning React and am currently trying out Jest and testing. I am starting tests on a small project and I want to get 100% code coverage. Here's what I have.

component:

import React from 'react';

function Square(props) {
    const className = props.isWinningSquare ?
        "square winning-square" :
        "square";
    return (
        <button
            className={className}
            onClick={() => props.onClick()}
        >
            {props.value}
        </button>
    );
}

export default Square

tests:

import React from 'react';
import Square from '../square';
import {create} from 'react-test-renderer';

describe('Square Simple Snapshot Test', () => {
    test('Testing square', () => {
        let tree = create(<Square />);
        expect(tree.toJSON()).toMatchSnapshot();
    })
})

describe('Square className is affected by isWinningSquare prop', () => {
    test('props.isWinningSquare is false, className should be "square"', () =>{
        let tree = create(<Square isWinningSquare={false} />);

        expect(tree.root.findByType('button').props.className).toEqual('square');
    }),
    test('props.isWinningSquare is true, className should be "square winning-square"', () =>{
        let tree = create(<Square isWinningSquare={true} />);

        expect(tree.root.findByType('button').props.className).toEqual('square winning-square');
    })

})

The line indicated as "uncovered" is

onClick={() => props.onClick()}

What is the best way to test this line? Any recommendations?
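One possibility (a sketch using the same react-test-renderer already imported in the tests): pass a jest.fn() as the onClick prop, invoke the button's onClick through the renderer, and assert the mock was called.

import {create, act} from 'react-test-renderer';

test('clicking the square calls props.onClick', () => {
  const onClick = jest.fn();
  const tree = create(<Square onClick={onClick} />);
  act(() => {
    tree.root.findByType('button').props.onClick();
  });
  expect(onClick).toHaveBeenCalledTimes(1);
});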

How to add data provider values in a TestNG extent report, version 2.41.2 (relevantcodes)

I have used a DataProvider to pass my test values, and I want these values to be displayed in my TestNG extent report. I have seven test cases, and these test cases run on multiple test values passed by the DataProvider. When clicking on the method name in the extent report, I want the report to display which values the test was executed with.

Here is my dataprovider class:

public class MyDataProvider {

    @DataProvider
    public Object[][] realTimeConfiguration() {
        return new Object[][] {
            new Object[] { "safari", "safari5.1", "macoslion" },
            new Object[] { "chrome", "chrome76", "win10", "1280x1024" },
            new Object[] { "chrome", "chrome75", "win10", "1280x1024" },
            new Object[] { "chrome", "chrome74", "win10", "1280x1024" },
            new Object[] { "chrome", "chrome73", "win10", "1280x1024" },
            new Object[] { "chrome", "chrome72", "win10", "1280x1024" },

            new Object[] { "firefox", "firefox68", "win10", "1280x1024" },
            new Object[] { "firefox", "firefox67", "win10", "1280x1024" },
            new Object[] { "firefox", "firefox66", "win10", "1280x1024" },
            new Object[] { "firefox", "firefox65", "win10", "1280x1024" },
            new Object[] { "firefox", "firefox64", "win10", "1280x1024" },
        };
    }
}
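One possible hook (a sketch; it relies on TestNG injecting the upcoming test method's parameters into a @BeforeMethod, and on the ExtentReports 2.x startTest API): name each extent test after the method plus its data-provider values.

import java.lang.reflect.Method;
import java.util.Arrays;
import org.testng.annotations.BeforeMethod;
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;

public class BaseTest {

    ExtentReports extent; // assumed to be initialized elsewhere in your setup
    ExtentTest test;

    @BeforeMethod
    public void startExtentTest(Method method, Object[] testData) {
        // TestNG injects the data-provider values as Object[]
        test = extent.startTest(method.getName() + " " + Arrays.toString(testData));
    }
}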

Thanks in advance !!

Instrumenting Mocha tests with custom performance monitoring

I've got a set of Mocha tests that have a nasty habit of timing out on CI non-deterministically.

A lot of these tests are asynchronous - they make heavy use of async/await and promises - and it seems like certain async calls are being problematic.

However, it can be hard to diagnose issues without manually instrumenting the tests, inserting scores of lines like console.log('ln 88: ', Date.now()) in a bid to watch where tests are being held up and to get granular timing on which parts of our tests are slow.

I had a thought that I could write some automated instrumentation that could identify async expressions and wrap them in a way that lets me log their performance.

I reasoned I could switch this on whenever the tests were causing me trouble. I could also use it in other projects to do detailed profiling of my tests in CI (which don't always have the same performance characteristics as when running locally).

I have some ideas as to how I can do the instrumentation - either via AST transforms or even crude string replacement - what I'm not sure is where / when.

Is my only option to write a Babel plugin, and pass Babel over my test source files? Or does Mocha provide any kind of hook in which I can transform test functions? Are there any other options?
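A lighter-weight alternative to AST transforms (a sketch; it assumes the file is loaded early enough that these hooks register at the root level, e.g. listed as the first spec file): per-test wall-clock logging via global hooks.

let start;

beforeEach(function () {
  start = Date.now();
});

afterEach(function () {
  // this.currentTest is provided by mocha inside hooks
  console.log(`${this.currentTest.fullTitle()}: ${Date.now() - start}ms`);
});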

How do I parameterize the parameters I am passing in my API request and execute it through Karate?

I am testing APIs for my application, and each API has multiple parameters to be passed, e.g.: https://abc.xyz.com/***.svc/restful/GetSummary?FromDate=2019/06/28&ToDate=2019/06/28&CompAreaId=15&RegId=4

Each parameter in the request has multiple values (within a defined set of values). If I want to parameterize each parameter with all the values it could possibly have, how can I create a scenario that will help me achieve this?

I would appreciate any hints/views. Thanks

I have been passing parameters as shown in the code below, but have been unable to pull off the scenario mentioned above; it would be time-consuming and repetitive to pass the parameters in a separate scenario each time.

Scenario: Verify if GetContext API returns data with params

Given path 'GetContext'
And param FromDate = '2019/06/27'
And param ToDate = '2019/06/27'
And param CompAreaId = 20
And param RegId = 4
When method get
Then status 200
* def res = response
* print 'response:', response
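One Karate-native option (a sketch; the Examples rows reuse values from the question and are otherwise assumptions): a Scenario Outline expands into one scenario per row, so each combination stays a separate test without copy-pasting.

Scenario Outline: Verify GetContext across parameter combinations

Given path 'GetContext'
And param FromDate = '<fromDate>'
And param ToDate = '<toDate>'
And param CompAreaId = <compAreaId>
And param RegId = <regId>
When method get
Then status 200

Examples:
| fromDate   | toDate     | compAreaId | regId |
| 2019/06/27 | 2019/06/27 | 20         | 4     |
| 2019/06/28 | 2019/06/28 | 15         | 4     |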

What To Unit Test

I'm a bit confused about how much I should dedicate to unit tests.

Say I have a simple function like:

appendRepeats(StringBuilder strB, char c, int repeats)

[This function will append char c repeats number of times to strB. e.g.:

strB = "hello"
c = "h"
repeats = 5
// result
strB = "hellohhhhh"

]

For unit testing this function, I feel there are already so many possibilities:

  • AppendRepeats_ZeroRepeats_DontAppend
  • AppendRepeats_NegativeRepeats_DontAppend
  • AppendRepeats_PositiveRepeats_Append
  • AppendRepeats_NullStrBZeroRepeats_DontAppend
  • AppendRepeats_NullStrBNegativeRepeats_DontAppend
  • AppendRepeats_NullStrBPositiveRepeats_Append
  • AppendRepeats_EmptyStrBZeroRepeats_DontAppend
  • AppendRepeats_EmptyStrBNegativeRepeats_DontAppend
  • AppendRepeats_EmptyStrBPositiveRepeats_Append
  • etc., etc.: strB can be null, empty, or have a value; c can be null or have a value; repeats can be negative, positive, or zero

That is already 3 * 2 * 3 = 18 test methods, and it could be a lot more for other functions if those also need tests for special characters, Integer.MIN_VALUE, Integer.MAX_VALUE, and so on. Where should I draw the line? Should I assume, for the purposes of my own program, that strB can only be empty or have a value, that c has a value, and that repeats can only be zero or positive?

Sorry for the bother. I am just genuinely confused about how paranoid I should be with unit testing in general. Should I stay within the bounds of my assumptions, or is that bad practice? Should I have a method for each potential case, in which case the number of unit test methods would scale up quite quickly?
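For what it's worth, a parameterized test (a hedged JUnit 5 illustration; the values are made up, and the appendRepeats under test is assumed to be in scope) collapses many of these combinations into one method plus a table, which makes covering extra cases cheap:

@ParameterizedTest
@CsvSource({
    "hello, h, 0, hello",       // zero repeats: unchanged
    "hello, h, 5, hellohhhhh",  // the example from above
    "'', h, 2, hh",             // empty builder
})
void appendRepeatsCases(String initial, char c, int repeats, String expected) {
    StringBuilder strB = new StringBuilder(initial);
    appendRepeats(strB, c, repeats);
    assertEquals(expected, strB.toString());
}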

Unit testing Angular with an RxJS observable inside ngOnInit

I want to confirm, using a Jest unit test, that the value of description is set inside my Angular component's form.

I've used an RxJS pipe on my videoMeta$ observable (inside my Angular component) so that each time a value is passed down the pipe, my form data is updated.

ngOnInit() {
        this.videoMeta$.pipe(map(videoMeta => {
            this.videoMetaForm = this.fb.group({
                title: [videoMeta.title, [Validators.required, Validators.maxLength(100)]],
                description: [''],
                privacy: ['', [Validators.required]],
                category: ['', [Validators.required]],
                tags: [videoMeta.tags.join(', '), Validators.maxLength(200)],
            });
        }), takeUntil(this.destroy$)).subscribe();
    }

The Jest unit test is set up like so:

describe('GIVEN video meta THEN description value', () => {
        // Given
        beforeEach(() => {
            component.videoMeta = getMockSingleVideoMeta();
        });

        // When
        beforeEach(() => {
            fixture.detectChanges();
        });

        // Then
        it('description field has value', async () => {
           expect(component.videoMetaForm.controls['description'].value).toEqual('In hac habitasse platea dictumst. Etiam faucibus cursus urna. Ut tellus.');
        });

    });

I can get my unit test to work as expected by simply delaying the assertion for some amount of time so the observable has a chance to resolve.

it('description field has value', async () => {
            setTimeout(() => {
                                expect(component.videoMetaForm.controls['description'].value).toEqual('In hac habitasse platea dictumst. Etiam faucibus cursus urna. Ut tellus.');
            }, 3000);
        });

This is clearly a hacky solution. Is there a best practice for testing values set up in ngOnInit that take time to resolve, without needing to change my component implementation?
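One direction to try (a sketch; it assumes the test can run inside Angular's fakeAsync zone and that videoMeta$ emits once pending work is flushed): fakeAsync/tick avoids real waiting entirely.

import { fakeAsync, tick } from '@angular/core/testing';

it('description field has value', fakeAsync(() => {
  component.videoMeta = getMockSingleVideoMeta();
  fixture.detectChanges(); // triggers ngOnInit and the subscription
  tick();                  // flush pending async work instead of sleeping
  expect(component.videoMetaForm.controls['description'].value)
    .toEqual('In hac habitasse platea dictumst. Etiam faucibus cursus urna. Ut tellus.');
}));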

Nested autowired field stays null in JUnit test

I have a factory pattern implemented and wanted to test it, except the fields in this factory are not getting autowired. It seems the @Autowired field in the factory class stays null. I can't use the @SpringBootTest annotation because of the blockchain configuration file that gets loaded then.

Below is the code of the service factory. The ParserFactory gets autowired correctly in the test; the problem is with the autowired fields of the ParserFactory.

@Service
@Slf4j
public class ParserFactory {

    @Autowired
    OsirisParser osirisParser;

    public Parser getParser(String system) {
        if (system == null) {
            return null;
        }
        if (system.equalsIgnoreCase("Progress")) {
            return ProgressCreateService();
        }
        if (system.equalsIgnoreCase("Osiris")) {
            log.debug("Osiris parsen creëren");
            return OsirisCreateService();
        }
        return null;

    }

    public OsirisParser OsirisCreateService() {
        return osirisParser;
    }

    public OsirisParser ProgressCreateService() {
        return new OsirisParser("ProgressParser");
    }
}

The test:

@RunWith(SpringRunner.class)
public class FactoryTest {

    @Mock
    ParserFactory serviceCallFactory;

    @Test
    public void testCreateOsirisServiceSuccesFull() {
        Parser serv = serviceCallFactory.getParser("Osiris");
        assertThat(serv, instanceOf(OsirisParser.class));
    }

    @Test
    public void testCreateProgressServiceSuccesFull()  {
        Parser serv = serviceCallFactory.getParser("Progress");
        assertThat(serv, instanceOf(ProgressParser.class));
    }

    @Test
    public void testCreateProgressServiceUnSuccessFull() {
        Parser serv = serviceCallFactory.getParser("Progrddess");
        assertThat(serv, is(not(instanceOf(OsirisParser.class))));
    }

    @Test
    public void testCreateWhenStringIsNotCorrect() {
        Parser serv = serviceCallFactory.getParser("0$iri$");
        assertThat(serv, is(nullValue()));
    }

    @Test
    public void testCreateWhenStringIsNull() {
        Parser serv = serviceCallFactory.getParser("");
        assertThat(serv,  is(nullValue()));
    }
}
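A hedged observation plus sketch: with @Mock, the whole factory is replaced by a stub whose methods all return null, so nothing real runs and nothing gets autowired. With plain Mockito, @InjectMocks builds a real ParserFactory and pushes mocks into its fields instead:

@RunWith(MockitoJUnitRunner.class)
public class FactoryTest {

    @Mock
    OsirisParser osirisParser;        // stands in for the Spring-autowired bean

    @InjectMocks
    ParserFactory serviceCallFactory; // a real factory, with the mock injected

    @Test
    public void testCreateOsirisServiceSuccesFull() {
        Parser serv = serviceCallFactory.getParser("Osiris");
        assertThat(serv, instanceOf(OsirisParser.class));
    }
}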

PHPUnit test a static function located in a trait

In a piece of legacy code I was tasked to test a static function in a trait like this:

namespace App\Model\SomeLogic;

trait WhyDecidedToUseTrait
{
   public static function aMethodThatDoesSomeFancyStuff()
   {
     //Method Logic
   }
}

This piece of documentation suggests using the getMockForTrait method. But in my case, making a dummy object in order to test a static function, where object instances are useless to begin with, has no value.

Also, testing the method through the objects that use this trait seems pretty time-consuming, and doing a larger-scale refactoring is time-consuming as well.

So how can I test the trait in order to gradually refactor any class that uses it?
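One cheap approach (a sketch; the stub class name and the assertion are placeholders): declare a minimal test-only class that uses the trait and call the static method on it directly, with no mocking at all.

use PHPUnit\Framework\TestCase;
use App\Model\SomeLogic\WhyDecidedToUseTrait;

class WhyDecidedToUseTraitStub
{
    use WhyDecidedToUseTrait;
}

class WhyDecidedToUseTraitTest extends TestCase
{
    public function testAMethodThatDoesSomeFancyStuff()
    {
        $result = WhyDecidedToUseTraitStub::aMethodThatDoesSomeFancyStuff();
        $this->assertNotNull($result); // placeholder assertion for the real logic
    }
}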

MSSQL: How to always roll back the entire transaction

I have a somewhat unusual need for my transaction handling.

I am working on a mutation test framework for MSSQL; for this I need to run my tests inside a transaction, so that the database is always in the state it started in when the tests finish.

However, I have the problem that users can write code inside the test procedures and may call rollback transaction, which may or may not be inside a (nested) transaction (savepoint).

At a high level it looks like this:

start transaction
initialize test
run test with user code
   may or may not contain:
      - start tran
      - start tran savename
      - commit tran
      - commit tran savename
      - rollback tran
      - rollback tran savename
output testresults
rollback transaction

Is there a way to make sure I can at least always roll back to the initial state? I have to take into account that users can call stored procedures/triggers that may be nested and can all contain transaction statements. With all my solutions, the moment a user uses rollback tran in their test code, they escape my transaction and not everything gets cleaned up.

What I want is that if a user calls rollback only their part of the transaction is rolled back and my transaction that I start before the initialization of the test is still intact. Maybe I am overthinking this a lot? Other solutions to this problem are welcome as well. Feel free to ask for clarifications.
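For reference, the savepoint pattern (a T-SQL sketch; it narrows but does not fully solve the problem, since a plain ROLLBACK TRAN in user code still rolls back past any savepoint):

BEGIN TRANSACTION;
SAVE TRANSACTION TestStart;        -- named savepoint

-- initialize test, run user code here

ROLLBACK TRANSACTION TestStart;    -- undo only the work done after the savepoint
COMMIT TRANSACTION;                -- close the outer transaction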

Xcode UI Testing plan

I would like to implement UI testing for my app, but I couldn't find any guides on how to structure the tests, group them, or on the best way to proceed when writing them: one big flow in a single test, or multiple tests for smaller flows. If you have some resources regarding these topics, any help would be appreciated.

Thank you!

How to split dataset into training and test set in MATLAB with latin hypercube sampling

I have a dataset of roughly 200 points right now and want to create a surrogate model from it in MATLAB. For this I am using the DTU Toolbox for the creation of the surrogate model.

My question now is how I prepare the dataset properly. I want to split it roughly into ~70% for training and the other 30% for verification of my model. My professor mentioned I could use latin hypercube sampling in MATLAB for this.

I found the function lhsdesign() in MATLAB, but as I understand it, it just creates a latin hypercube sample of random points between a lower and an upper boundary. These points don't represent points of my dataset, though. So is it possible to create this distribution from this specific dataset in MATLAB?

Or do I have to do a second step and try to match the random points to my dataset?
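That matching step could look roughly like this (a hedged MATLAB sketch; it assumes the Statistics and Machine Learning Toolbox for lhsdesign and knnsearch, and X as a 200-by-d matrix of inputs):

n      = size(X, 1);
nTrain = round(0.7 * n);
lhs    = lhsdesign(nTrain, size(X, 2));        % LHS points in [0,1]^d
Xn     = (X - min(X)) ./ (max(X) - min(X));    % normalize dataset to [0,1]
idx    = knnsearch(Xn, lhs);                   % nearest dataset point per LHS point
trainIdx = unique(idx);                        % may be fewer than nTrain after duplicates
testIdx  = setdiff((1:n)', trainIdx);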

How can I make a checklist for a Denial-of-Service attack? And what are the test cases of a DoS attack?

I don't know how to make a checklist and test cases for a denial-of-service attack. Can anybody guide me? :")

set aria-checked property of radio to true after fireEvent.click in react-testing

I want to set the aria-checked property of the radio button to true after fireEvent.click. I do not want to do it via setAttribute but via onClick.

I have tried the following code to test my component.

Radiobutton.js

import React from 'react';

function Radiobutton(props) {
  const { label, onClick, onKeyPress, isChecked } = props;

  return (
    <div
      className="radiobutton"
      role="radio"
      onClick={onClick}
      onKeyDown={onKeyPress}
      aria-checked={isChecked}
      tabIndex={0}
      value={label}
    >
      <span className="radiobutton__label">{label}</span>
    </div>
  );
}

export default Radiobutton;

Radiobutton.test.jsx

test("radiobuttons must use click", () => {
  const handleChange = jest.fn();
  const { container } = render(
    <Radiobutton onClick={handleChange} isChecked={false} label="Dummy Radio" />
  );
  const radiobtn = getByRole(container, "radio");
  fireEvent.click(radiobtn);
  expect(handleChange).toHaveBeenCalledTimes(1);
  expect(radiobtn.getAttribute("aria-checked")).toBe("true");

});

I am able to call the handleChange function but unable to change the value of aria-checked. I'm getting the following error:

    expect(received).toBe(expected) // Object.is equality

    Expected: "true"
    Received: "false"

      30 |   expect(handleChange).toHaveBeenCalledTimes(1);
    > 31 |   expect(radiobtn.getAttribute("aria-checked")).toBe("true");
         |                                                 ^
      32 |   console.log(prettyDOM(radiobtn));
      33 | });

      at Object.toBe (src/__tests__/radiobutton.test.jsx:31:49)

I'm new to testing. Any help would be much appreciated. Thanks in advance.
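A hedged explanation plus sketch: aria-checked here mirrors the isChecked prop, and a click cannot change a prop by itself; the test has to update the prop (or render a stateful wrapper). With react-testing-library's rerender:

const { container, rerender } = render(
  <Radiobutton onClick={handleChange} isChecked={false} label="Dummy Radio" />
);
fireEvent.click(getByRole(container, "radio"));
expect(handleChange).toHaveBeenCalledTimes(1);

// Simulate the parent responding to the click by flipping the prop:
rerender(
  <Radiobutton onClick={handleChange} isChecked={true} label="Dummy Radio" />
);
expect(getByRole(container, "radio").getAttribute("aria-checked")).toBe("true");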

Thursday, June 27, 2019

Is exporting a function just for testing considered a bad practice?

I have this question especially after reading the official redux documentation on testing React components:

https://github.com/reduxjs/redux/blob/master/docs/recipes/WritingTests.md

In order to be able to test the App component itself without having to deal with the decorator, we recommend you to also export the undecorated component

Even the famous https://www.reactboilerplate.com/ exports named unconnected components just to be able to test them without mocking a store.
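Concretely, the pattern looks something like this (a sketch; the names are placeholders):

// App.js
export function App(props) { /* the plain, easily testable component */ }

export default connect(mapStateToProps)(App); // what the application imports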

But isn't it considered bad to export something just so it makes things easier to test?

There might be cases where a developer makes the wrong import and introduces a bug just because there are two things exported from a file.

So, the question essentially is:

Can we make changes to the actual code to make testing easier?

Although this question is specific to React, it would be great to know if any other languages or frameworks have similar problems and how they are dealt with.

How to test a method that returns a new object of its class

Here is the code:

MyClass myMethod(String string){
    String x = string.substring...// body is not important here
    return new MyClass(x);
}

This code works as intended. I have created some tests (based on various assertions); all passed (shown in green), but I'm wondering if I really test it. Unfortunately, none of these tests show up as "coverage" in the JaCoCo report.

I have no idea what I am doing wrong. I would like to "cover" this method in JaCoCo.

Using random numbers vs hardcoded values in unit tests

When I write tests, I like to use random numbers to calculate things.

e.g.

func init() {
	rand.Seed(time.Now().UnixNano())
}

func TestXYZ(t *testing.T) {
	amount := rand.Intn(100)
	cnt := 1 + rand.Intn(10)
	for i := 0; i < cnt; i++ {
		doSmth(amount)
	}
	// more stuff
}

which of course has the disadvantage that

expected := calcExpected(amount, cnt)

in that the expected value for the test needs to be calculated from the random values.

I have received criticism for this approach:

  • It makes the test unnecessarily complex
  • Less reproducible due to randomness

I think though that without randomness, I could actually:

  • Make up my results, e.g. the test only works for a specific value. Randomness proves my test is "robust"
  • Catch more edge cases (debatable as edge cases are usually specific, e.g. 0,1,-1)

Is it really that bad to use random numbers?

(I realize this is a bit of an opinion question, but I am very much interested in people's point of views, don't mind downvotes).
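A common middle ground (a Go sketch; the TEST_SEED variable name is an assumption) is to keep the randomness but log the seed, so any failing run can be replayed:

seed := time.Now().UnixNano()
if s := os.Getenv("TEST_SEED"); s != "" {
	seed, _ = strconv.ParseInt(s, 10, 64) // replay a previous run
}
t.Logf("seed: %d", seed)
rand.Seed(seed)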

If a function depends on another function, how do you replace that inner function with mock data for unit testing?

So I have something like so:

class MyClass {
    private myVar;

    public MyClass(someValue) {
        // Performs some operation to determine myVar
    }

    public double calculateThing() {
        // Returns some arithmetic operation on myVar
    }

    public double calculateOtherThing() {
        newVar = calculateThing();
        // Returns some arithmetic operation on newVar
    }
}

calculateThingUnitTest() {
    MyClass class = new MyClass(someValue);

    Assert::AreEqual(someConstant, class.CalculateThing());
}

calculateOtherThingUnitTest() {
    MyClass class = new MyClass(someValue);

    Assert::AreEqual(someOtherConstant, class.CalculateOtherThing());
}

It is clear that calculateThingUnitTest is a proper unit test because it initializes a class, gives it some literal-defined independent value in the constructor, and the assertion it makes is based only on calculateThing(), so it tests one "unit" of the application.

However, calculateOtherThingUnitTest calls calculateOtherThing which then makes a call to calculateThing to determine if the result is correct. So, if calculateThing ends up failing, calculateOtherThing will fail as well.

In order to prevent this, you would want to put some mock value in place of calculateThing, and you could achieve this by inserting a new member variable into MyClass and making calculateThing load into that member variable. Then before calling calculateOtherThing you could just insert a literal value into this member variable and you would have a proper unit test.

Adding a new member variable and exposing it publicly seems extraordinarily excessive, however. Is there a better way to achieve this?
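Most mocking frameworks cover exactly this with partial mocks. A hedged Java/Mockito sketch (the stubbed value is a placeholder):

// spy() wraps a real instance; only calculateThing() is stubbed.
MyClass spy = Mockito.spy(new MyClass(someValue));
Mockito.doReturn(42.0).when(spy).calculateThing();

// calculateOtherThing() now runs against the stubbed value,
// so it no longer fails just because calculateThing() is broken.
Assert.assertEquals(someOtherConstant, spy.calculateOtherThing(), 1e-9);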

Are browser tests needed?

Why would someone need to do browser tests (using tools like Laravel Dusk) when they are already doing feature tests and unit tests?

Are browser tests needed for the app, and why?

Proxy between database and Cucumber selenium tests

I have tests written for my web app in Cucumber with Selenium. In these tests I need to connect to a database (Postgres) which is on some server. This is a problem in the local development process because establishing the connection takes time, so every time I write some changes and want to run the tests I need to wait more than 10 seconds to connect to the database.

So I thought about some proxy that can be run once and then serve connections to my local project.

Does this make any sense? Maybe there are tools for that? If not, maybe you know some blogs or tutorials for something like that? Preferred technologies: Java, Grails, Node.js.

Thanks in advance!

How to put INTENT_CONFIDENCE asserter as a global variable in Botium

My question is about how to set INTENT_CONFIDENCE as a global asserter instead of putting it all over the convos.

I tried two ways in the botium.json configuration file, but neither works:

Thanks in advance for your help

within the "ASSERTERS" capability, I tried:

  "INTENT_CONFIDENCE": 100 -- I have done the same but with "100"/ "100 %"

and

  "ref": "INTENT_CONFIDENCE", 
  "val": "100" 

The error looks like:

AppData\Roaming\npm\node_modules\botium-cli\node_modules\yargs\yargs.js:1133 else throw err ^

Error: Failed to fetch package botium-asserter-undefined - { Error: Cannot find module 'botium-asserter-undefined'

How to demonstrate or qualitatively test a Java class before packaging?

My current project contains an interface and an implementation of a RandomTextGenerator type (such as might be used in an adventure game to generate original character or place names). I am using Maven to compile and package the project and JUnit 5 for unit testing. I'd like to structure this project properly in order to open-source it (if only just to learn the proper setup).

My JUnit tests do a lot, but they can't qualitatively test the big question: do the randomly-generated names come out sounding good?

How should I test this? Options I'm considering:

  • Add a new class with a static void main() that generates and prints a bunch of these names to System.out.
  • Add a JUnit test that does the above and then ends with assertTrue(true).
  • Build a new project outside of this package, which imports the package and does the above.

I'd like to know the generally accepted best practice, given my intent to open-source this project. Would consumers of my package want to see these tests or not, and where?

Is there a way to force that a method is tested?

I'd like to mark unit-tested methods as tested in Java code, so that a compiler error occurs in case of a missing testing method. Is there a way to do that? I couldn't find a satisfying solution.

How to pass data from a JSON file to a Gherkin feature file

I want to parameterize my Gherkin feature file steps by taking data from some other JSON file. Any suggestions for this? I searched almost everywhere but couldn't find answers.

I am aware of the scenario where Examples are used with multiple values for a variable via a Scenario Outline in a Gherkin feature file, but I am not looking for that.

Currently I am using it like this, and the values in quotes below are passed to the step definitions:

    Scenario: Buy last coffee
        Given There is "Starbucks" coffee
        And I added "Sugarless" syrup

Expected: I want to get the data for the variables from a JSON file (or any other file as well) and pass these data values to the step definition functions. Is this possible?

gherkin feature file:

    Scenario: Buy last coffee
        Given There is "${data.coffeeshop}" coffee
        And I added "${data.sugarType}" syrup

data.json:

    {
        "coffeeshop": "starbucks",
        "sugarType": "Sugarless"
    }
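One workaround (a sketch assuming cucumber-js; the step text and the data. prefix convention are my assumptions): keep plain placeholders in the feature file and resolve them against data.json inside the step definitions.

const { Given } = require('cucumber');
const data = require('./data.json');

// Resolve "data.coffeeshop" -> data['coffeeshop']; pass anything else through.
const resolve = (value) =>
  value.startsWith('data.') ? data[value.slice('data.'.length)] : value;

Given('There is {string} coffee', function (coffeeshop) {
  const shop = resolve(coffeeshop);
  // ... use shop ...
});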

Embedded Software Testing

I would like to know about integration testing for embedded software.

Does embedded software integration testing include the microcontroller, a simulated environment, or a PC environment? How do I decide on the approach for integration testing, black box or white box? What is the standard input for testing?

Thanks

SyntaxError: Unexpected token < in JSON at position 668 (package.json)

I'm pretty new to this, but I'm trying to automate testing in a Shopify webstore with Cypress. The thing is, I am unable to do so: when I try to run the tests, the Cypress browser emulator pops up this error before anything else:

Error: SyntaxError: Unexpected token < in JSON at position 668 while parsing json file ...\package.json

What causes this error? And any ideas on how to fix it?

Feel free to ask me if more information is needed; I just want to fix this issue so I can work on the tests.

How to write unit tests for an asynchronous function with a callback argument

I'm writing common tests for my Kotlin multiplatform library, which implements the API business logic using the ktor client library.

I have a function which takes a callback as an argument, uses coroutines to make the request to the API, and then executes the callback.

Here is a simplified version of the function from my UserApi class that I want to test:

fun <T : Any> fetch(
    requestBuilder: HttpRequestBuilder,
    deserializer: DeserializationStrategy<T>,
    callback: (Either<ErrorMessage, T>) -> Unit)
{
    GlobalScope.launch(dispatcherIO) {
        val result: Either<ErrorMessage, T> =
        try {
            val returnObject: T = Json.parse(
                deserializer, 
                HttpClient().post(requestBuilder)
            )
            Either.Right(returnObject)
        } catch (e: Exception) {
            Either.Left(ErrorMessage(e.message))
        }
        withContext(dispatcherMain) { callback(result) }
    }
}

I would like to write a unit test like this:

@Test
fun requestOK() {
    runTest { // runTest returns a platform-specific runBlocking
        UserApi().fetch(request, User.serializer()) {
            it.fold(
                { failure -> fail("must return success") },
                { user -> assertEquals(expectedUser, user) }
            )
        }
    }
}
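One hazard with this shape (a hedged sketch reusing the names above): fetch returns immediately, so the test can finish before the callback ever runs. Bridging the callback into a CompletableDeferred lets the test suspend until the result arrives:

@Test
fun requestOK() {
    runTest {
        val result = CompletableDeferred<Either<ErrorMessage, User>>()
        UserApi().fetch(request, User.serializer()) { result.complete(it) }
        result.await().fold(
            { fail("must return success") },
            { user -> assertEquals(expectedUser, user) }
        )
    }
}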

'No JUnit tests found' when using Mockito in Spring Tool Suite

When I try to run tests that contain mocks from Mockito in Spring Tool Suite 4 (and Eclipse 2019-06) using 'Run As > JUnit Test', I get the following error: "No JUnit tests found".

I have no problem running the tests on 'Run As > Gradle Test'.

If the test class has no @Mock annotation it also works without a problem; when the annotation is added I begin to get this error when trying to run as a JUnit Test.

Any idea about what the problem might be?

Pass custom API key in command line with Cypress

I am testing with Cypress, connecting to an external API that requires a key; however, I am conscious about not pushing a private key publicly to GitHub.

Is there a way I can pass my API key through the command line, or in another safe way, at the time of running the Cypress app?

"run-cypress": "cypress run --browser chrome --reporter-options configFile=cypress.json"

Could I append something such as

apiKey=abcdefg...

after configFile...? And how would I access this in the code?
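Cypress has built-in environment-variable support that fits this case (a sketch; the apiKey name is an assumption). Any OS variable prefixed with CYPRESS_ is picked up automatically, or a value can be passed with the --env flag:

CYPRESS_apiKey=abcdefg npm run run-cypress
cypress run --browser chrome --env apiKey=abcdefg

Inside a test, the value is then read back with:

const apiKey = Cypress.env('apiKey'); // undefined if not provided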

How can I get the text of one column of a table using TestCafe and then assert that it is eql to "something"?

I have a page with a "search field", a "search button", and a table with 5 columns. I want to make an automated test with TestCafe + JavaScript as follows:

1: Type in "search field" - DONE

2: Click "search button" - DONE

3: Get the TEXT of all elements in the second column and assert that it is eql to "something".

I made it with Java + Selenium WebDriver, but I am not that good with JavaScript and still can't work out how to do it there.
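A possible shape for step 3 (a TestCafe sketch; the CSS selector is an assumption about the table's markup):

import { Selector } from 'testcafe';

test('second column equals "something"', async t => {
  const cells = Selector('table tbody tr td:nth-child(2)');
  const count = await cells.count;
  for (let i = 0; i < count; i++) {
    await t.expect(cells.nth(i).innerText).eql('something');
  }
});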

Wednesday, June 26, 2019

Test and Main folder order in Eclipse

I am working on a Maven project in Eclipse. For some reason, the test and main folders are not shown in their usual way: test is shown above main, instead of below. (image: the odd folder order in Eclipse)

This is pretty annoying, of course, and I'd like to change it but can't figure out how. I also suspect that this may be the cause of some other issues I have with Eclipse in this project (tests not running), but I can't be sure.

Does anyone know a possible reason for this behaviour and can point me towards a solution? Many thanks!

Is there a library for Fractions and Decimal numbers in Dart?

I have been trying to learn Dart for a while now and am really liking it so far. Currently I'm hitting some issues where tests are failing because I'm comparing doubles for equality (I know this is almost always a bad idea), but I'm surprised a modern language like Dart doesn't handle this in a smarter way.

I would like to know if there are any libraries for Dart like the decimal library for Python.

Or, if that's not the case, what the best practices are for writing unit tests for functions that return a double.

I have tried two workarounds so far:

Difference less than epsilon:

bool fractionsAreEqual(double a, double b, double epsilon) {
  return (a - b).abs() < epsilon;
}

Compare fixed strings:

bool fractionsAreEqual(double a, double b, int precision) {
  return a.toStringAsFixed(precision) == b.toStringAsFixed(precision);
}

But these seem very hacky and not ideal.

I would like a library or suggestion on how I can do something like:

double func() {
    // Some calculations that result in a double, d
    return d;
}

double expected = 33 / 100; // Or any other fraction

expect(func(), equals(expected));
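For the testing side specifically, the matcher library that ships with package:test has a closeTo matcher (a hedged sketch below; the tolerance is arbitrary), which may remove the need for a decimal library in unit tests:

import 'package:test/test.dart';

void main() {
  test('func returns roughly 33/100', () {
    expect(func(), closeTo(33 / 100, 1e-9));
  });
}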

Reproducible random tests

I've recently run into the following piece of documentation. A cursory search revealed just a gist. So I decided to bring these ideas to life.

The first one is far from usable as is. The second one is too wordy and doesn't work in case of failure (failureException is supposed to be a class; the corresponding method can be commented out).

Before I describe my solutions, let me ask my questions. 1. Am I doing anything wrong? This sort of task must have been simplified ages ago, at least in my book. 2. Is there any way to improve my solutions?

The first one (branch factory-boy-state) stores the random generators' state in files that can later be used to pass the state to the tests:

import os
import pickle
from base64 import b64decode, b64encode

import factory.random
from django.test.runner import DiscoverRunner

# `faker` is assumed to be the Faker instance used by the factories

class MyTestRunner(DiscoverRunner):
    def setup_test_environment(self):
        self.handle_rnd_state(faker.generator.random, 'faker.state',
            'FAKER_RANDOM_STATE')
        self.handle_rnd_state(factory.random.randgen, 'factory.state',
            'FACTORY_RANDOM_STATE')
        super().setup_test_environment()

    def handle_rnd_state(self, rnd, fname, env_var):
        state = self.retrieve_rnd_state(fname, env_var)
        if state:
            rnd.setstate(state)
        else:
            self.store_rnd_state(fname, rnd.getstate())

    def retrieve_rnd_state(self, fname, env_var):
        state = os.environ.get(env_var)
        return pickle.loads(b64decode(state.encode('ascii'))) if state else None

    def store_rnd_state(self, fname, state):
        encoded_state = b64encode(pickle.dumps(state))
        with open(fname, 'w') as f:
            f.write(encoded_state.decode('ascii'))

Usage:

$ ./manage.py test --testrunner p1.tests.MyTestRunner
$ FAKER_RANDOM_STATE=`cat faker.state` FACTORY_RANDOM_STATE=`cat factory.state` ./manage.py test --testrunner p1.tests.MyTestRunner

Another one (branch factory-boy-seed) seeds the random generators, but the seed can be overridden via an environment variable:

class MyTestRunner(DiscoverRunner):
    def setup_test_environment(self):
        seed = os.environ.get('FACTORY_SEED')
        if seed:
            seed = seed.encode('ascii')
        if not seed:
            seed = b64encode(os.urandom(16))
        print('-- seed: %s' % seed.decode('ascii'))
        factory.random.reseed_random(seed)
        super().setup_test_environment()

Usage:

$ ./manage.py test --testrunner p1.tests.MyTestRunner
$ FACTORY_SEED=IPnHCxdWehAxz6ctbNvyPQ== ./manage.py test --testrunner p1.tests.MyTestRunner

The test runner can be put into the settings.

Testing Office.js add-ins against old versions of Excel

I'm developing an Office.js add-in for a corporate client which insists on using legacy versions of Excel (1708 or 1808).

But when I install Excel from Office 365, I get version 1906.

In the old days of software, there were installers you could find in a directory, with version stamps on them, and you could create a VM and just install the version you wanted. But the fancier installers appear to download stuff automatically and always (or at least by default) install the latest versions.

How do I install a legacy version of Excel on a VM for testing?
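For what it's worth, the Office Deployment Tool can pin a build via configuration.xml (a hedged sketch; the Version value is a placeholder for the actual build number of the version you need, and the channel/product IDs depend on your license):

<Configuration>
  <Add OfficeClientEdition="32" Channel="SemiAnnual" Version="16.0.xxxx.yyyy">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Updates Enabled="FALSE" /> <!-- keep it from auto-updating past the pinned build -->
</Configuration>

setup.exe /configure configuration.xml would then install that specific build on the VM.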

How to integration-test a controller whose methods get a token from another site

I'm setting up an API which gets authenticated to another server through JWT. One of the controller methods I want to test gets the token from an external site. How can I test this method?

I made a test server and tried to mimic the website action of providing tokens. I can access this test server through the test methods, but I can't get to it from the actual controller.

Here is my test server setup method. MockedTokenController is the controller that is supposed to provide the token; it works fine, and I can get tokens from the test units. AuthController is the controller I am supposed to test.

var server = new TestServer(
    new WebHostBuilder()
        .UseEnvironment(TestConstants.EnvironmentName)
        .UseStartup<Startup>()
        .UseUrls(TestConstants.MockAddress)
        .ConfigureTestServices(config =>
        {
            config.AddOptions();
            config.AddMvc()
                .SetCompatibilityVersion(Microsoft.AspNetCore.Mvc.CompatibilityVersion.Version_2_2)
                .AddApplicationPart(typeof(AuthController).Assembly)
                .AddApplicationPart(typeof(MockedTokenController).Assembly);

            config.AddSingleton<MockedTokenController>();
            config.BuildServiceProvider();
        }));
server.BaseAddress = new Uri(TestConstants.MockAddress);
server.CreateHandler();
_client = server.CreateClient();
_client.BaseAddress = new Uri(TestConstants.MockAddress); // I tried with and without this line

Here is the test method which is failing:

var request = new HttpRequestMessage(HttpMethod.Get, "/Auth/Login");
var response = await _client.SendAsync(request);
var contents = await response.Content.ReadAsStringAsync();
Assert.Equal(HttpStatusCode.OK, response.StatusCode);

Here is the AuthController Login method code:

[HttpGet("Login")]
public async Task<IActionResult> LoginAsync(){
         var token = await _authService.GetAuthenticationTokenAsync();
         return Ok(token);
}

Here is the AuthService code which is called from AuthController:

public async Task<string> GetAuthenticationTokenAsync()
{
    HttpClient client = new HttpClient();
    var response = await client.SendAsync(request,
        HttpCompletionOption.ResponseContentRead);
    response.EnsureSuccessStatusCode();
    var token = await response.Content.ReadAsStringAsync();
    return token;
}

Extra information: the test method for the mocked controller works fine. It seems the problem is in the second step of using the mocked controller; the first step works fine. I mean, I can access the mocked controller from the test unit in the first step, but when I try to access it through the main controller (AuthController), I can't get to it.
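A hedged explanation plus sketch: TestServer is in-memory and does not listen on a real socket, so the new HttpClient() created inside AuthService can never reach it; only clients created by server.CreateClient() (or built on server.CreateHandler()) can. Injecting the HttpClient would let the test substitute one (request is left undefined here, as in the original snippet):

public class AuthService
{
    private readonly HttpClient _client;

    // The test passes server.CreateClient(); production code passes a real client.
    public AuthService(HttpClient client)
    {
        _client = client;
    }

    public async Task<string> GetAuthenticationTokenAsync()
    {
        var response = await _client.SendAsync(request,
            HttpCompletionOption.ResponseContentRead);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}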

What's the difference between TestBed.get and new Service(...dependencies)

The Angular guide demonstrates two different ways of testing: one by calling new Service() and providing the dependencies to the constructor directly, and the second using dependency injection by calling TestBed.get(Service).

Both of these seem functionally identical to me, except that when I call TestBed.get() consecutively it does not call the constructor after the first call.

The Angular documentation also mentions that TestBed.get() is deprecated (even though the guide still references it!) and that I should use Type or InjectionToken instead, but I do not see how either of these classes could replace TestBed.get().
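To make the observed difference concrete (a hedged illustration; the names are placeholders):

const a = TestBed.get(MyService); // constructor runs on first resolution
const b = TestBed.get(MyService); // same cached singleton; constructor not re-run

const c = new MyService(new FakeDep()); // fresh instance, dependencies hand-wired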

Can't create my own facade and test it with mocking

I am trying to create tests for a class that makes some API calls to another program, and I don't want to send requests while I'm testing my code.

I used the "extend Facade" on my class, and i get an error when i try to use the "shouldReceive".

 MyClass::shouldReceive('update')
            ->once()
            ->with([
                'id' => 1
            ])
            ->andReturn(200);

The error is: Cannot redeclare Mockery_3_MyClass::shouldReceive()
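A hedged sketch of the usual Laravel shape: the class extending Facade should be a thin pointer to a service bound in the container, not the service itself; Laravel then swaps the underlying binding for a Mockery mock, so shouldReceive is never redeclared on the real class.

use Illuminate\Support\Facades\Facade;

class MyClassFacade extends Facade
{
    protected static function getFacadeAccessor()
    {
        return MyClass::class; // the real service, resolved from the container
    }
}

// In the test:
MyClassFacade::shouldReceive('update')
    ->once()
    ->with(['id' => 1])
    ->andReturn(200);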

How to properly test a function that fills a combobox from a database

I am a beginner in Python, and I am facing a small problem. I am using PyQt5 for my UI, which is a form. I need to fill some comboboxes with data from a database.

For the moment, it's in a class named Ui_dialog. I have two questions.

  1. How can I properly write a unit test for this function? I do not really know how to start my tests sometimes...
  2. Do I need to use the resources of my project (database, XMLs...), or do I need to make some special data for my tests in order to cover all possible cases?

Here is the function:

def fill_combobox_from_database(self, name_of_table, name_of_field, combobox):
    connexion = sqlite3.connect(os.path.dirname(os.path.abspath(__file__)) + "\\" + Ui_dialog.NAME_DB)
    cursor = connexion.cursor()
    try:
        request = "select {0} from {1}".format(name_of_field, name_of_table)
        results = cursor.execute(request)
        for row in results:
            combobox.addItem(row[0])
    except Exception as e:
        print(e)
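One way to start (a hedged pytest sketch; it assumes pytest, that a QApplication can be created, and, crucially, that Ui_dialog.NAME_DB can be redirected to a throwaway file, which the current path-joining code may not allow without a small refactor): seed a temporary SQLite database with known rows, pass in a real QComboBox, and assert on its items.

import sqlite3
from PyQt5.QtWidgets import QApplication, QComboBox

def test_fill_combobox_from_database(tmp_path):
    app = QApplication.instance() or QApplication([])  # widgets need a QApplication

    db_path = tmp_path / "test.db"
    con = sqlite3.connect(str(db_path))
    con.execute("CREATE TABLE fruits (name TEXT)")
    con.executemany("INSERT INTO fruits VALUES (?)", [("apple",), ("pear",)])
    con.commit()

    Ui_dialog.NAME_DB = str(db_path)  # assumption: the class lets the path be redirected
    ui = Ui_dialog()
    combobox = QComboBox()
    ui.fill_combobox_from_database("fruits", "name", combobox)

    assert [combobox.itemText(i) for i in range(combobox.count())] == ["apple", "pear"]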

Thanks for reading and for helping me!

How to use JavaScript to run automated tests in the browser using Selenium + Cucumber

Well, I am kind of new to this. First of all, my main goal is to execute a simple Cucumber example which automatically tests something extremely simple as well. By doing this I will try to get the idea of how I should do other kinds of automated tests. So, I wrote some scenarios, and I want to test them somehow on a site (e.g. google.com). The site is written in JS, and therefore I need to write JavaScript code to "connect" the scenarios with the language.

I google-searched things like "How to automatically test a site using cucumber", "How to automatically run scenarios with selenium-javascript", and so on...

Any ideas? No hateful comments please :/ Thanks in advance!
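For reference, a minimal cucumber-js step definition driving a browser might look like this (a sketch; it assumes the cucumber and selenium-webdriver npm packages and a local ChromeDriver):

const { Given, Then, After } = require('cucumber');
const { Builder } = require('selenium-webdriver');

let driver;

Given('I open {string}', async function (url) {
  driver = await new Builder().forBrowser('chrome').build();
  await driver.get(url);
});

Then('the title contains {string}', async function (expected) {
  const title = await driver.getTitle();
  if (!title.includes(expected)) {
    throw new Error(`expected title to contain "${expected}", got "${title}"`);
  }
});

After(async function () {
  if (driver) await driver.quit();
});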

DL.

Testing: How to write tests for very specific code

I need to write tests for a code base, and I had some confusion regarding how to go about this. I searched the internet and Stack Overflow; however, I couldn't find a suitable answer to my question.

Suppose you have code that does something very specific. For example, the function needs to be provided with a root_directory (e.g. a database) which is required to check if the string provided already exists in the database. The code would break if the correct root_directory isn't provided, and it would break as well if the string isn't in the correct format. For the sake of context, look at the simple code below, where we provide the code with the root_dir and the string to search for.

root_dir = "/Documents/MyDatabase/"
specific_string_to_find = "A01_55678"

def get_all_db_files(root_dir, specific_string_to_find):
    searchFor(specific_string_to_find, root_dir)

This is all quite straightforward. However, if I have to write a test for such a function, would I provide my test with the correct root_dir where the database exists? Would I provide it with the correct format of the string? If I don't provide the correct information, the code will break. And if I do provide it, I feel like it renders the test void, since I already know exactly what's going in. Any guidance would be much appreciated. I apologize if the question is inappropriate; however, I require some guidance for this specific use case.
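One common answer in sketch form (pytest assumed; this also assumes get_all_db_files returns searchFor's result): the test builds its own throwaway "database" directory, so it controls both the happy path and the failure path without depending on the real one.

def test_finds_existing_string(tmp_path):
    (tmp_path / "records.txt").write_text("A01_55678")
    assert get_all_db_files(str(tmp_path), "A01_55678")

def test_does_not_find_missing_string(tmp_path):
    (tmp_path / "records.txt").write_text("something else")
    assert not get_all_db_files(str(tmp_path), "A01_55678")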

Spring Boot tests not running in Eclipse

I have a project with a load of Spring Boot tests that run fine when I do a Maven build, but always fail almost immediately when I run them from Eclipse.

The tests from a different project, which is structured very similarly, run fine in Eclipse.

I'm not sure if it's relevant, but for some reason src/test/java and src/test/resources are located before src/main/java and src/main/resources in the package explorer view for that project, and I can't find out why or how to change it.

The exceptions are usually very plain:

org.springframework.beans.factory.BeanDefinitionStoreException: Failed to parse configuration class; nested exception is org.springframework.context.annotation.ConflictingBeanDefinitionException: Annotation-specified bean name 'swaggerConfig' for bean class conflicts with existing, non-compatible bean definition of same name and class

Or something similar.

Any ideas why Eclipse is messing with me?

Edit: using Version: 2018-12 (4.10.0)

How to run tests in JavascriptCore?

I'm developing a JS library that should be consumable by react-native projects as well. The JavaScript engine used by react-native is JavaScriptCore, which has its differences compared to Node or the engines used in browsers. There are quite a few things missing or behaving differently: URL, Buffer, Blob.

It would be nice to catch these kinds of issues without causing regressions for library consumers.

Is there a test runner which supports such a scenario?

Mock Bing Maps API with Jest in React.js

In my Jest test I need to generate a Bing Maps pushpin like this:

it('...', () => {
  var e = new window.Microsoft.Maps.Pushpin({ "latitude": 56.000, "longitude": 46.000 }, {});
  /* do stuff */
  expect(e.getColor()).toEqual('#ffd633');
})

But while I'm running the test I get the error:

TypeError: Cannot read property 'Maps' of undefined

Does someone know how to mock the Bing Maps API's Microsoft interfaces with Jest in React?
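A possible direction (a sketch; the canned color only exists to make the shown assertion pass): stub just enough of the Microsoft.Maps namespace onto window before the tests run, e.g. in a setup file or a beforeAll.

beforeAll(() => {
  window.Microsoft = {
    Maps: {
      Pushpin: jest.fn().mockImplementation((location, options) => ({
        getColor: () => '#ffd633', // canned value for this test
      })),
    },
  };
});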

Search all test executions in Squash TM returns an empty table

I have some test executions in my campaign workspace (passed, ready, failure, etc.), but when searching for all executions of a project X, no records are reported in the search table.

SQUASH TM VERSION 1.19.0.RELEASE https://www.squashtest.org/en

Please, how could we fix this issue?

Test stage in gitlab-ci not generating artifacts

While using GitLab CI I encountered a problem. Running the tests sometimes returns an error, but I couldn't debug it from the console, since Laravel logs everything into a logs folder. I thought I might save those logs as artifacts.

I tried setting up the tests in the build stage, and that would solve it, but that seems wrong to me.

composer:
  stage: build
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    - cp .env.example .env
    - php artisan key:generate
  artifacts:
    expire_in: 1 hour
    paths:
      - vendor/
      - .env
  cache:
    key: ${CI_COMMIT_REF_SLUG}-composer
    paths:
      - vendor/

phpunit:
  stage: test
  dependencies:
    - composer
  script:
    - php artisan migrate:fresh --seed
    - phpunit --coverage-text --colors=never
  artifacts:
    expire_in: 1 day
    paths:
      - storage/logs/
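A hedged guess at the missing piece: by default GitLab only collects artifacts from successful jobs, so a failing phpunit job uploads nothing. artifacts:when: always keeps the logs either way:

phpunit:
  stage: test
  dependencies:
    - composer
  script:
    - php artisan migrate:fresh --seed
    - phpunit --coverage-text --colors=never
  artifacts:
    when: always      # collect logs even when the job fails
    expire_in: 1 day
    paths:
      - storage/logs/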

How to test XGB model with a single new sentence?

We trained an XGB model with two columns: Summary has texts, and Security_Flag has 0 and 1. Testing and training work well. Now we want to add a new sentence (not included in the original file). It still works as long as we use only known words from the original file, but if we use a completely new word, we get an error message.

It all works; only the last code line throws the error.

Please advise. Thanks.

We tried to enter the new sentence in different ways.

import matplotlib.pyplot as plt
from xgboost import plot_tree
import xgboost as xgb
import pandas as pd
import numpy as np
import pickle
import string
import nltk
import csv
import os
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.datasets import dump_svmlight_file
from sklearn.metrics import precision_score
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def pp(text):

    # tokenize into words

    # remove stopwords
    stop = stopwords.words('german')
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    tokens = [token for token in tokens if token not in stop]
    # remove words less than three letters
    tokens = [word for word in tokens if len(word) >= 3]
    # lower capitalization
    tokens = [word.lower() for word in tokens]
    # lemmatize
    lmtzr = nltk.WordNetLemmatizer()
    tokens = [lmtzr.lemmatize(word) for word in tokens]
    preprocessed_text= ' '.join(tokens)
    return preprocessed_text

df = pd.read_csv("file03.csv", sep=",", usecols=["Security_Flag","Summary"])
y = df["Security_Flag"]
# from dataframe to array for train test splitting
y = y.values

Z = []
for row in df['Summary']:
    l = pp(row)
    Z.append(l)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(Z)

X = X.toarray()
#X = pd.DataFrame(data=X[0:,0:])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=41)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

param = {
    'max_depth': 3,  # the maximum depth of each tree
    'eta': 0.3,  # the training step for each iteration
    'silent': 1,  # logging mode - quiet
    'objective': 'multi:softprob',  # error evaluation for multiclass training
    # 'objective': 'binary:logistic',
    'num_class': 2}  # the number of classes that exist in this dataset

num_round = 20 # the number of training iterations
bst = xgb.train(param, dtrain, num_round)

preds = bst.predict(dtest)
best_preds = np.asarray([np.argmax(line) for line in preds])
stest = xgb.DMatrix([X_test[0]])
spred = bst.predict(stest)

print(confusion_matrix(y_test, best_preds))

while True:
    ts = input("Enter a sentence: ")
    ts = pp(ts)
    Z.append(ts)
    Y = vectorizer.fit_transform(Z)
    Y = Y.toarray()
    test = xgb.DMatrix([Y[-1]])
    spred = bst.predict(test)

"The expected result would be a one or zero. The output is an error message." training data did not have the following fields: f1354, f1355, f1352, f1353

Can't run NUnit tests for x86 projects

I can't run NUnit tests after I changed the 'Platform target' to x86.

I'm using a .NET Core 2.2 project in VisualStudio 2019 and I'm running my tests with ReSharper. The NUnit Version is 3.11.0. The following example works for x64, but does not for x86:

[TestFixture]
public class Tests
{

    [Test]
    public void Test1()
    {
        Assert.Pass();
    }
}

'Test1 Ignored: VsTest test-case is missing. Rebuild the project and try again.' So it seems that the test is no longer identified as an NUnit test and that the IDE tries to interpret it as an MSTest?

I want to create test suite for testing REST API's with Python

I have several APIs which I trigger and test using Postman. I want to write Python test scripts for the same REST calls. How should I write simple test scripts that call the APIs and validate the responses? Please guide me.
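
A common starting point is pytest together with the requests library: each call plus its expected response becomes one test function, run with the pytest command. A minimal sketch, with hypothetical URLs and fields:

import requests

BASE_URL = "https://example.com/api"  # hypothetical


def test_get_user_returns_200_and_expected_fields():
    response = requests.get(f"{BASE_URL}/users/1", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert "id" in body  # validate the payload, not just the status


def test_unknown_resource_returns_404():
    response = requests.get(f"{BASE_URL}/no-such-thing", timeout=10)
    assert response.status_code == 404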

Tuesday, June 25, 2019

Do I group my test project's packages by the features or as UI and Integration followed by features?

I'm writing a test project that tests some features using Java and Selenium. I would like to know if there is a standard for structuring this. My question is whether I should separate packages based on UI and integration tests, followed by features (such as HomePage), or whether it would be better to structure the project by features first and then add packages for UI and integration as required.

Currently, I have only UI tests with sub packages for my page objects that I am testing. I have considered separating tests using @Tag annotations.

It would be helpful if someone could provide some reference or projects that I could refer to improve my design in terms of both readability and design. Please let me know if there is any other information I can provide. Thanks in advance.

How to set runs on TestNG to exclude similar test group names

I am using Java and TestNG. I have multiple tests running by groups and they run fine, BUT when I have group names containing a similar string, like SearchTests and GeneralSearchTests, the tests in the second group run twice, because they also run when SearchTests is executed.

How can I configure TestNG to avoid this?

Error: Cannot find module './Users/mackbookpankaj/Desktop/AutomationJSFramework/node_modules/nightwatch/bin/runner.js' Require stack:

JS Automation Testing Framework environment set up

Converting junit xml result files to simple output

I would like to convert a folder full of JUnit XML files to a simplified console output.

Is this possible with some tool out of the box? If so, how? Any kind of command-line tool would be fine.
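
If no ready-made tool fits, a dozen lines of Python over the common JUnit XML layout (testsuite elements containing testcase elements, with failure/error/skipped children) produce a simple console summary. A sketch; the reports/ path is hypothetical:

import glob
import xml.etree.ElementTree as ET

for path in glob.glob("reports/*.xml"):
    root = ET.parse(path).getroot()
    # iter() matches the root element itself too, so this handles both a
    # bare <testsuite> root and a <testsuites> wrapper element.
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                status = "FAIL"
            elif case.find("skipped") is not None:
                status = "SKIP"
            else:
                status = "PASS"
            print(status, case.get("classname"), case.get("name"))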

Convolution Neural Network application Python

I have a trained CNN where the input is a 5x5 grid and the output is a single floating-point value. The basics of the model are that the 5x5 grid variables (var1, var2, var3) are centered over the truth values, which are scattered throughout an entire dataset, where the entire dataset is typically a 540 x 596 grid. I have named the model 'mymodel.h5', so the weights should be saved from the training. I have used a large amount of training/validation data, and the independent testing data shows good performance (correlation ~ 0.97, MAE ~ 2.3, and mean bias ratio ~ 0.98).

The idea now is to run this 5x5 CNN model over an entire dataset, where the full grid has dimensions 540 x 596. I want to extract every possible 5x5 grid, run each through the CNN model, and recreate a grid of the results. However, the resulting grid does not look like I expected, and I am wondering if there is a bug in the code.

 # load in the data
 f1 = netCDF4.Dataset('/path/to/data.netcdf')
 # How big is the grid size?
 gsize = 5
 # Get the values from the netcdf file
 Z1 = f1.variables['VariableName']
 # Get the dimensions of the data
 lenx = Z1.shape[0]
 leny = Z1.shape[1]
 # Pre-allocate the matrix
 allZ = np.zeros([(lenx-gsize) * (leny-gsize), gsize, gsize])
 cnt = 0

 # Now extract the gsize x gsize grids from the data within Z1
 for x in range(gsize, lenx-gsize):
     for y in range(gsize, leny-gsize):
         cnt +=1
         allZ[cnt,:,:] = Z1[x:x+gsize,y:y+gsize]

 # Load model
 model = load_model('/path/to/mymodel.h5')
 sim_predict = model.predict(allZ, verbose = 1)

 # Reshape the sim_predict variable to be the same dimensions as Z1
 reshape_sim = np.reshape(sim_predict,(lenx-gsize, leny-gsize))

allZ should be a matrix that loops through each of the gsize x gsize (in this case, 5 x 5) grids available within the domain, while sim_predict is the output value from each 5 x 5 grid. I have padded the edges by 5 to remain within the domain as well, yet the results look weird. Any thoughts on the structure of the code?
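
For what it's worth, a sketch of one consistent indexing scheme: the posted loop increments cnt before the first assignment (so row 0 of allZ stays empty), starts at x = gsize rather than 0, and the loop counts match neither the preallocated size nor the final reshape. Keeping one set of patch counts everywhere avoids that (and if the network was trained with a channel axis, allZ may also need a trailing np.newaxis):

 # Number of valid top-left corners for a gsize x gsize patch
 nx = lenx - gsize + 1
 ny = leny - gsize + 1
 allZ = np.zeros([nx * ny, gsize, gsize])

 cnt = 0
 for x in range(nx):
     for y in range(ny):
         allZ[cnt, :, :] = Z1[x:x + gsize, y:y + gsize]
         cnt += 1  # increment after storing, so index 0 is used

 sim_predict = model.predict(allZ, verbose=1)
 # The reshape must use the same patch counts the loops produced
 reshape_sim = np.reshape(sim_predict, (nx, ny))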

Angular 7 testing with injection tokens

Using: Angular 7, Jasmine 2.8.0, Jasmine Core 2.99.1

My unit test is testing a component using an Injection Token, and breaks.

It fails with the error "Uncaught TypeError: Cannot read property 'app' of undefined thrown".

The only reference in the code to app without a suffix is in the configuration file. If I remove the part that uses the configuration file from the component, the test runs fine. If I inject the config but don't use it, the test runs fine. The only thing that breaks it is actually using the configuration in the component.

The component I am testing uses some configuration through an Injection Token.

This is my test:

describe('AppSomethingComponent', () => {
    let component: AppSomethingComponent;
    let fixture: ComponentFixture<AppSomethingComponent>;
    const mockDataConfig: IDataConfig = {
        ...
        baseLayers: {
            cartodb_black_sea: {
                ...
                url: ''
            }
        }
    };
    const mockAppConfig: IAppConfig = {
      defaultBaseLayerKey: 'cartodb_black_sea',
      ...
  };

    beforeEach(async(() => {
        TestBed.configureTestingModule({
            declarations: [AppSomethingComponent],
            providers: [
                {
                    provide: APP_CONFIG,
                    useValue: mockAppConfig
                },
                {
                  provide: DATA_CONFIG,
                  useValue: mockDataConfig
                }
            ]
        }).compileComponents();
    }));

    beforeEach(() => {
        fixture = TestBed.createComponent(AppSomethingComponent);
        component = fixture.componentInstance;
        fixture.detectChanges();
    });

    it('should create', () => {
      expect(component).toBeTruthy();
    });
});



This is my component:

export class AppSomethingComponent implements OnInit {
    private map: L.Map;
    constructor(
        @Inject(DATA_CONFIG) public dataConfig: IDataConfig,
        @Inject(APP_CONFIG) public appConfig: IAppConfig,
    ) {
        // linter go away
    }

    public ngOnInit() {
        this.map = L.map('app-map').setView([20, 20], 2);

// This is where I think it is breaking in the test
        L.tileLayer(
            this.dataConfig.baseLayers[this.appConfig.defaultBaseLayerKey].url
        ).addTo(this.map);
    }
}

This is my config

export const APP_DI_CONFIG: IAppConfig = {
    defaultBaseLayerKey: ''
};

export let APP_LOCAL_CONFIG: IADT2Config = (window as any).__appConfig;
export const APP_CONFIG = new InjectionToken<IAppConfig>('app.config');


My guess is the tests are looking for app.config from new InjectionToken<IAppConfig>('app.config'), not finding it, and freaking out. I had to replace some words, so forgive me if capitalization is not consistent; it is in the actual app and in my actual tests.
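
If the undefined value really is (window as any).__appConfig (it is read the moment the config module is first imported), one thing worth trying is to define that global in the test entry point before any spec imports the config file. A sketch; the object shape is a guess at what IADT2Config expects:

// src/test.ts -- runs before the spec files are loaded, so the config
// module sees a defined window.__appConfig when it is imported.
(window as any).__appConfig = {
    app: {
        // ...hypothetical fields; fill in whatever the component reads
    },
};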

Reasons why fixture.detectChanges() might be causing testing timeouts

Adding fixture.detectChanges() in the beforeEach() method is causing my unit tests for one file to time out. I've seen other spec.ts files with this statement in them and those work fine. What are potential reasons for this timeout?


How to properly test a Flink window function?

Does anyone know how to test windowing functions in Flink? I am using the dependency flink-test-utils_2.11.

My steps are:

  1. Get the StreamExecutionEnvironment
  2. Create objects and add them to the environment
  3. Do a keyBy
  4. Add a session window
  5. Execute an aggregate function

public class AggregateVariantCEVTest extends AbstractTestBase {

    @Test
    public void testAggregateVariantCev() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        env.fromElements(objectOne, objectTwo)
                .keyBy(new KeyedByMyCustomKey())
                .window(EventTimeSessionWindows.withGap(Time.seconds(1)))
                .aggregate(new MyAgreggateFunction());

        JobExecutionResult result = env.execute();

        assertEquals(myExpectedResults, result.getAllAccumulatorResults());
    }
}

The problem is that result.getAllAccumulatorResults() size is 0.

Any ideas what I am doing wrong? Thanks in advance!
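
The pattern the Flink testing documentation suggests is to attach a sink that collects emitted results into a static, synchronized list and assert on it after env.execute(); accumulator results stay empty unless the job registers accumulators explicitly. A sketch, with MyResult standing in for whatever your aggregate emits:

// Collects everything the pipeline emits; the list is static so all
// deserialized sink instances in a single-JVM test share it.
public static class CollectSink implements SinkFunction<MyResult> {
    public static final List<MyResult> values =
            Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(MyResult value, Context context) {
        values.add(value);
    }
}

Wire it in with .aggregate(new MyAgreggateFunction()).addSink(new CollectSink()), clear CollectSink.values in a @Before method, and assert on the list after env.execute().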

List of emails to conduct Survey related to microservices and tests

I am a student of the Computer Science course at the Federal University of Juiz de Fora (UFJF), Brazil. I am conducting, along with my advisor, a survey related to architecture and testing in microservices, but I do not yet have a list of contacts for the desired audience. I wonder if some of you would volunteer to answer this brief field survey. If so, send your contact emails so that I can submit the survey form. Thank you in advance for your attention.

How to run a pytest test for each item in a list of arguments

Suppose I have a list of HTTP URLs like

endpoints = ["e_1", "e_2", ..., "e_n"]

And I want to run n tests, one for each endpoint. How can I do that?

A simple way to test all the endpoints at once would be

def test_urls():
    for e in endpoints:
        r = get(e)
        assert r.status_code < 400

or something like that. But as you can see, this is one test for all the n endpoints and I would like a little more granularity than that.

I have tried using a fixture like

@fixture
def endpoint():
    for e in endpoints:
        yield e

but, apparently, pytest is not really fond of "multiple yields" in a fixture and fails with a yield_fixture function has more than one 'yield' error.
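
The built-in way to get one test per argument is pytest.mark.parametrize, which generates and reports a separate test case for each endpoint. A minimal sketch, with placeholder URLs:

import pytest
from requests import get

endpoints = ["https://example.com/api/one", "https://example.com/api/two"]  # hypothetical


@pytest.mark.parametrize("endpoint", endpoints)
def test_endpoint(endpoint):
    # Each endpoint becomes its own test case, reported individually.
    r = get(endpoint)
    assert r.status_code < 400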

Unit Testing xunit The following constructor parameters did not have matching fixture data

I get the error 'The following constructor parameters did not have matching fixture data' when unit testing with Moq and xUnit.

I am already using dependency injection and a mock to test the class.

//this is how i register the DI.
services.AddScoped<IWaktuSolatServiceApi, WaktuSolatServiceApi>(); 


 public interface IWaktuSolatServiceApi
 {
    Task<Solat> GetAsyncSet();
 }


// the unit test. 
public class UnitTest1 
{
    Mock<IWaktuSolatServiceApi> waktu;

    public UnitTest1(IWaktuSolatServiceApi waktu)
    {
        this.waktu = new Mock<IWaktuSolatServiceApi>();
    }

    [Fact]
    public async Task ShoudReturn()
    {
        var request = new Solat
        {
            zone = "lala"
        };

        var response = waktu.Setup(x => 
        x.GetAsyncSet()).Returns(Task.FromResult(request));
    }
}

But I get this error: The following constructor parameters did not have matching fixture data.
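
xUnit resolves test-class constructor parameters only from fixtures (IClassFixture<T> and collection fixtures), so an arbitrary interface parameter produces exactly this message. Since the constructor builds the mock itself anyway, the parameter can simply be dropped. A sketch of the adjusted test (the final assertion is illustrative):

using System.Threading.Tasks;
using Moq;
using Xunit;

// IWaktuSolatServiceApi and Solat are the types from the question.
public class UnitTest1
{
    private readonly Mock<IWaktuSolatServiceApi> waktu;

    // Parameterless: xUnit only injects fixture types into test class
    // constructors, never arbitrary services, hence the original error.
    public UnitTest1()
    {
        waktu = new Mock<IWaktuSolatServiceApi>();
    }

    [Fact]
    public async Task ShouldReturn()
    {
        var request = new Solat { zone = "lala" };
        waktu.Setup(x => x.GetAsyncSet()).ReturnsAsync(request);

        var result = await waktu.Object.GetAsyncSet();
        Assert.Equal("lala", result.zone);
    }
}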

Testing RXJS custom pipes

I have a custom pipe that is just a composition of multiple operators; however, I would like to test the code.

export type AnnotatedResponse<T> = T & {
    awaitingNewValues: boolean;
    failed: boolean;
};

function merge<T>(
    original: T,
    awaitingNewValues: boolean,
    failed: boolean,
): AnnotatedResponse<T> {
    return Object.assign({},
        original,
        { awaitingNewValues, failed },
    );
}

export function annotateAwaiting<I, T extends {}>(
    initialValue: T,
    toAnnotate: (input: I) => Observable<T>,
): (source: Observable<I>) => Observable<AnnotatedResponse<T>> {
    return (source: Observable<I>) => {
        let lastValue: T = initialValue;
        return source.pipe(
            switchMap(input => concat(
                of(merge(lastValue, true, false)),
                toAnnotate(input).pipe(
                    take(1),
                    map(result => {
                        lastValue = result;
                        return merge(result, false, false);
                    }),
                    catchError(() => {
                        return of(merge(lastValue, false, true));
                    }),
                ),
            )),
            finalize(() => {
                //  Free reference to prevent memory leaks
                lastValue = undefined as any;
            }),
        );
    };
}

The test

fdescribe('annotateAwaiting$', () => {
    let subject: Subject<string>;
    let annotated: Observable<AnnotatedResponse<{ letter: string }>>;
    let response: Subject<Error | { letter: string }>;

    beforeEach(() => {
        subject = new Subject<string>();
        response = new Subject();
        annotated = subject.pipe(
            annotateAwaiting(
                { letter: '' },
                letter => response.pipe(
                    take(1),
                    map(val => {
                        if (val instanceof Error) {
                            throw val;
                        } else {
                            return val;
                        }
                    }),
                ),
            ),
            shareReplay(1),
        );
    });

    afterEach(() => {
        subject.complete();
        response.complete();
    });

    it('should emit immediately', () => {
        let i = 0;
        subject.next('a');
        annotated.pipe(
            take(1),
        ).subscribe(obj => {    //  Runs
            expect(obj.awaitingNewValues).toBe(true);
            expect(obj.failed).toBe(false);
            expect(obj.letter).toBe('');
            i++;
        });

        expect(i).toBe(1);

        response.next({ letter: 'a'});
        annotated.pipe(
            take(1),
        ).subscribe(obj => {    //  Doesn't run (neither async)
            expect(obj.awaitingNewValues).toBe(false);
            expect(obj.failed).toBe(false);
            expect(obj.letter).toBe('a');
            i++;
        });

        expect(i).toBe(2);
    });
});

What am I missing? Why doesn't it seem to run at all?
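
A plain Subject does not buffer: the subject.next('a') at the top of the test fires before anything has subscribed to annotated, so that value is simply lost, and shareReplay(1) can only replay values it observed after its first subscriber arrived. Subscribing once, up front, and asserting on the collected emissions sidesteps this. A sketch:

it('should emit immediately', () => {
    const received: AnnotatedResponse<{ letter: string }>[] = [];
    const sub = annotated.subscribe(obj => received.push(obj));

    subject.next('a');  // the pipeline is now listening
    expect(received.length).toBe(1);
    expect(received[0].awaitingNewValues).toBe(true);
    expect(received[0].letter).toBe('');

    response.next({ letter: 'a' });
    expect(received.length).toBe(2);
    expect(received[1].awaitingNewValues).toBe(false);
    expect(received[1].letter).toBe('a');

    sub.unsubscribe();
});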

Test suite result dependency in testNG

I have test suite xml file that contains other test suites in the following format:

<suite-files>
    <suite-file path="debug1.xml" />
    <suite-file path="debug2.xml"/>

</suite-files>

Is there a way to add a dependency between the two sub suites? Meaning if debug1.xml fails, then debug2.xml should not be executed at all. TestNG does not provide any dependency on the <suite-files> level. Can this be done, perhaps, through some listener?

press ctrl+enter keyboard in laravel dusk

In Laravel Dusk, how do I type text and then press Ctrl+Enter? I have to write text and then press Ctrl+Enter to trigger the next step. The text is written into an element like this:

<body id="tinymce" class="mce-content-body " data-id="any_area" contenteditable="true" spellcheck="false">hi //d&nbsp;<span data-assign-id="85">userName</span>&nbsp; </body> 

How to handle unit test / integration test for a Installation / Setup Application that can be used in multiple platform such as Windows, Linux, Unix

I'm maintaining a code repository that is responsible for deploying / installing / upgrading agents that run on different platforms: Windows, Unix, Linux.

The code base already has some unit tests covering the behavior of important components.

I'm now thinking about how I could increase the test coverage and the confidence in the setup / upgrade by introducing integration testing and integrating it with Travis.

What are the available test framework that I could explore on this type of application?

Monday, June 24, 2019

Android Kotlin Espresso Mockito Test and verify Navigation Controller of a fragment navigates to 2 conditional fragments as a destination

In my app, I am testing the navigation components with a NavController, using Kotlin, the Espresso framework, and Mockito. The app has multiple fragments. A few tests are running fine, but I am not able to make the following scenario work:

There is a fragment, say fragment2, that navigates to either fragment3 or fragment4 depending on the input the user provides in fragment2.

I just want to understand how this should work. Below is my code for another test which runs fine, for fragment4 to fragment2 (fragment4 always redirects to fragment2, i.e. a single destination):

@Test
fun testNavigationFromGameOverToGameFragment() {
    // Create a mock NavController
    val mockNavController = mock(NavController::class.java)

    // Create a graphical FragmentScenario for the TitleScreen
    val gameOver = launchFragmentInContainer<GameOverFragment>(themeResId = R.style.AppTheme)

    // Set the NavController property on the fragment
    gameOver.onFragment { fragment ->
        Navigation.setViewNavController(fragment.requireView(), mockNavController)
    }

    // Verify that performing a click prompts the correct Navigation action
    onView(ViewMatchers.withId(R.id.tryAgainButton)).perform(ViewActions.click())

    verify(mockNavController).navigate(R.id.action_gameOverFragment_to_gameFragment)
}

I want an approach for the conditional two-destination case.
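
One way to handle the conditional case, following the same pattern: write one test per destination, drive fragment2's input so that exactly one branch should fire, then verify the matching action. A sketch with hypothetical fragment, view, and action names:

@Test
fun testNavigationFromFragment2ToFragment3() {
    val mockNavController = mock(NavController::class.java)
    val scenario = launchFragmentInContainer<Fragment2>(themeResId = R.style.AppTheme)
    scenario.onFragment { fragment ->
        Navigation.setViewNavController(fragment.requireView(), mockNavController)
    }

    // Provide input that satisfies the fragment3 condition, then submit.
    onView(ViewMatchers.withId(R.id.inputField))
        .perform(ViewActions.typeText("input that selects fragment3"))
    onView(ViewMatchers.withId(R.id.submitButton)).perform(ViewActions.click())

    verify(mockNavController).navigate(R.id.action_fragment2_to_fragment3)
}

A second test feeds the fragment4 input and verifies R.id.action_fragment2_to_fragment4 instead.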

How to mock a connection using nock library?

I have application code that verifies response.connection.remoteAddress isn't a local IP, but when using nock, response.connection is just an EventEmitter instance with no remoteAddress property. How do I modify the response that nock creates?

Google foobar Python: failure on two tests. Why do the 3rd and the 10th tests fail? What happens if I submit a wrong solution?

I was invited to play Google's foobar challenge. I am currently on Level 2.1 with the following question. Being a henchman isn't all drudgery. Occasionally, when Commander Lambda is feeling generous, she'll hand out Lucky LAMBs (Lambda's All-purpose Money Bucks). Henchmen can use Lucky LAMBs to buy things like a second pair of socks, a pillow for their bunks, or even a third daily meal!

However, actually passing out LAMBs isn't easy. Each henchman squad has a strict seniority ranking which must be respected - or else the henchmen will revolt and you'll all get demoted back to minions again!

There are 4 key rules which you must follow in order to avoid a revolt:

  1. The most junior henchman (with the least seniority) gets exactly 1 LAMB. (There will always be at least 1 henchman on a team.)
  2. A henchman will revolt if the person who ranks immediately above them gets more than double the number of LAMBs they do.
  3. A henchman will revolt if the amount of LAMBs given to their next two subordinates combined is more than the number of LAMBs they get. (Note that the two most junior henchmen won't have two subordinates, so this rule doesn't apply to them. The 2nd most junior henchman would require at least as many LAMBs as the most junior henchman.)
  4. You can always find more henchmen to pay - the Commander has plenty of employees. If there are enough LAMBs left over such that another henchman could be added as the most senior while obeying the other rules, you must always add and pay that henchman.

Note that you may not be able to hand out all the LAMBs. A single LAMB cannot be subdivided. That is, all henchmen must get a positive integer number of LAMBs.

Write a function called solution(total_lambs), where total_lambs is the integer number of LAMBs in the handout you are trying to divide. It should return an integer which represents the difference between the minimum and maximum number of henchmen who can share the LAMBs (that is, being as generous as possible to those you pay and as stingy as possible, respectively) while still obeying all of the above rules to avoid a revolt. For instance, if you had 10 LAMBs and were as generous as possible, you could only pay 3 henchmen (1, 2, and 4 LAMBs, in order of ascending seniority), whereas if you were as stingy as possible, you could pay 4 henchmen (1, 1, 2, and 3 LAMBs). Therefore, solution(10) should return 4-3 = 1.

To keep things interesting, Commander Lambda varies the sizes of the Lucky LAMB payouts. You can expect total_lambs to always be a positive integer less than 1 billion (10^9). I tried different solutions for this second problem of the Google foobar competition, but 2 tests always fail. I don't understand why tests 3 and 10 fail. How can I fix it? Could the test cases be wrong? If I submit a solution which doesn't pass the tests, will I lose the competition?

def spending(lambs):
    henchmen_num = 1
    prec = 0
    curr = 1
    lambs -= 1
    while lambs > 0:
        if lambs < curr * 2:
            if lambs >= curr + prec:
                henchmen_num += 1
            break
        henchmen_num += 1
        curr, prec = curr * 2, curr
        lambs -= curr

    return henchmen_num


def greedy(lambs):
    henchmen_num = 1
    prec = 0
    curr = 1
    lambs -= 1
    while lambs > 0:
        if lambs < prec + curr:
            break
        henchmen_num += 1
        curr, prec = curr + prec, curr
        lambs -= curr

    return henchmen_num

def solution(total_lambs):
    if total_lambs <= 0 or total_lambs > 10**9:
        return 0
    return greedy(total_lambs) - spending(total_lambs)

Tests 3 and 10 always fail. Is there a problem with these two test cases, or is my code incorrect?

understanding notation: const { someVar } = otherVar [duplicate]

The more JavaScript code I read in real working code, the more quirks of notation I run into that I can't seem to wrap my head around and have never seen in documentation. The following example is from the top of the Node.js lib http.js.

const { Object } = primordials;

Knowing from whence it came, however, has done little to help me understand what it means; either how it works, or how it is an advantage because of the way it does.

So, I cobbled a time wasting thing (below) I was hoping might help me understand what's going on, but it just made me more confused than when I started the inquiry.

My question is twofold. What is a more accepted way to investigate bits of code I don't understand, like the one presented? And second, will you explain what that method reveals about the sample's meaning and use?

One possible clue for me is the error message, something to do with destructuring, but it still didn't seem to help, and following that path hasn't so far gotten me to a simpler way. Is it alright that when messing with JS, I get nostalgic for pydoc?

In the comments of my attempt, I've included a couple of half-remembered notations which, to my mind, are found in the most fluent and highly regarded of code I've read that, for their inclusion, I have trouble understanding.

// const { Object } = primordials;

const y = {key: 'birthday', val: 99}
const { x } = y
simple(x) // undefined
simple(y) // object y

const n = 88
const { m } = n
simple(m) // undefined
simple(n) // 88

const q = function(){console.log(77);}
const { p } = q
simple(p) // undefined
simple(q) // function q

const b = null
// const { a } = b // error: Cannot destructure property `a` of 'undefined' or 'null'
// simple(a)
simple(b) // null

function simple (z) {
    console.log(z);

}

// !!somevar = 'a_val'
// [someReference] 
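
In short, const { Object } = primordials is property destructuring: it is equivalent to const Object = primordials.Object, and pulling out a name the object doesn't have yields undefined, which is why the simple(x)-style calls above all print undefined. A tiny sketch:

// Destructuring pulls a property out of an object by its name.
const primordials = { Object: 'captured early, before user code runs' };

const { Object } = primordials;   // same as: const Object = primordials.Object;
console.log(Object);              // 'captured early, before user code runs'

const { Missing } = primordials;  // no such property...
console.log(Missing);             // ...so the new binding is undefined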

test mobile app on several mobiles (various screens)

I created an app with Expo (React Native); it works fine on my mobile. I would like to test it on several mobiles with various screens. Is there a platform that allows me to do that?

How to find binning for extremely skewed distributions

What I need is to create some test data to put into my test database. I cannot use live data, and when I want to test my logic I need to have "plausible" data in my test database. Let's say I am working with just one table T (I will need to do the same thing for many tables) that has several columns, some numerical and some categorical. For categorical "dimensions", creating "plausible" rows is easy: if, for instance, I have 3 categories, and I see that 50% of the real rows have category A, 30% have category B and 20% have category C, then for each test row that I create I can generate a random number and: if it is less than 0.5 I select A, if it is higher than 0.8 I select C, and B otherwise.
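
That categorical scheme is just sampling from the empirical distribution, which numpy can do directly. A one-line sketch of the same idea:

import numpy as np

# Sample one plausible category according to the observed frequencies.
categories = ['A', 'B', 'C']
frequencies = [0.5, 0.3, 0.2]
value = np.random.choice(categories, p=frequencies)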

I would like to have a similar method for continuous dimensions, and to that end I thought about binning. The problem is, I don't know how many bins to use and whether it would be better to use bins of different sizes. Ideally, I would like a bin to encompass all the consecutive values that appear with a similar frequency. Unfortunately, I have some extremely skewed distributions. Example: column C has value 0 in 1.4 million rows, and in the remaining 0.1 million rows it assumes 80K different values, with frequencies ranging from 1 to 250.

I need an algorithm that can handle such extreme situations without the need for human intervention. A possible distribution here would be: for each row, take a number from 1 to 15. If it is less than 15, the value of the test column is 0; otherwise it is a random value from 0 exclusive up to the maximum value of the column. I am not sure this is a good representation of the table and, most of all, I need to find this distribution automatically for the possible real distributions of values.

I already tried the Freedman-Diaconis rule, but this gives me bins of width 0 because the IQR is 0. Are there other algorithms that I could use?

Thanks a lot

How test directive with Renderer2?

I have created a small directive that prevents default of event(s) passed to it.

@Directive({
    selector: '[sPreventDefault]'
})
export class PreventDefaultDirective {
    private events: (() => void)[] = [];
    @Input('sPreventDefault') set listenOn(events: string | string[]) {
        this.removeListeners();

        if (typeof events == 'string') {
            events = [events];
        }
        this.registerEventListener(
            events,
            e => {
                if (e instanceof Event) {
                    e.stopPropagation();
                } else {
                    e.srcEvent.stopPropagation();
                }
            },
        );
    }

    // Note: this class has no base class, so there is nothing to call super() on.
    constructor(private elementRef: ElementRef<HTMLElement>, private renderer: Renderer2) {
    }

    protected registerEventListener(listenOn: string[], eventListener: (e: Event | HammerJSEvent) => void): void {
        this.events = listenOn.map(eventName => {
            return this.renderer.listen(this.elementRef.nativeElement, eventName, eventListener);
        });
    }
    protected removeListeners(): void {
        this.events.forEach(dispose => dispose());
        this.events = [];
    }
}

Test suit

@Component({
    selector: 'test-host',
    template: `<div [sPreventDefault]="events"></div>`,
})
class TestHostComponent {
    @ViewChild(PreventDefaultDirective) directive!: PreventDefaultDirective;
    @Input() events: PreventDefaultDirective['listenOn'] = [];
}

fdescribe('PreventDefaultDirective', () => {
    let host: TestHostComponent;
    let hostElement: DebugElement;
    let fixture: ComponentFixture<TestHostComponent>;
    let directive: PreventDefaultDirective;

    beforeEach(async(() => {
        TestBed.configureTestingModule({
            declarations: [
                TestHostComponent,
                PreventDefaultDirective,
            ],
        }).compileComponents();

        fixture = TestBed.createComponent(TestHostComponent);
        hostElement = fixture.debugElement;
        host = fixture.componentInstance;
        directive = host.directive;
    }));

    it('should create an instance', () => {
        host.events = ['testEvent'];
        fixture.detectChanges();
        expect(directive).toBeTruthy();
    });

    it('should add listener', () => {
        host.events = ['testEvent'];
        fixture.detectChanges();

        //  DebugElement.listeners is null
        expect(hostElement.listeners.length).toBe(1);
        expect(hostElement.listeners.map(l => l.name)).toBe(host.events);
    });
});

My problem is that DebugElement does not seem to know about events registered via the Renderer2.listen method. What is the right way to test this?
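
As far as I can tell, DebugElement.listeners is only populated for handlers bound in component templates, not for listeners attached imperatively through Renderer2.listen. One alternative (a sketch; By comes from @angular/platform-browser) is to dispatch a real DOM event and assert on its observable effect:

it('should stop propagation of the registered event', () => {
    host.events = ['testEvent'];
    fixture.detectChanges();

    const el: HTMLElement =
        hostElement.query(By.directive(PreventDefaultDirective)).nativeElement;
    const event = new Event('testEvent', { bubbles: true });
    spyOn(event, 'stopPropagation');

    el.dispatchEvent(event);

    // The directive's handler receives a plain Event and should stop it.
    expect(event.stopPropagation).toHaveBeenCalled();
});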

How can I use puppeteer for react-native?

I want to run e2e automation testing for the app I developed with React Native, using Puppeteer. How can I do that?

Play framework WSClient testing

I have a service class in my code, for example UserService. It has a WSClient member which makes calls to a web service. I would like to write test cases for the UserService methods.

I did some research on testing WSClient but did not find any use case like mine. Should I set up a live test service, or use mocking?

class UserService @Inject()(ws: WSClient) {

  def getUser(userId: UUID) = { /* some code */ }

  def createUser(obj: User) = { /* another piece of code */ }
}

These methods use the WSClient to call the web service endpoints.

I would like to write test cases for these methods. Which is best: setting up a test service and calling that, or mocking the WSClient?
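
Mocking is usually enough for unit tests of the service methods: stub the fluent WSClient chain so that no network is involved. A sketch (Mockito and ScalaTest import paths vary with your versions; UserService and User are the types from the question):

import java.util.UUID

import org.mockito.ArgumentMatchers.anyString
import org.mockito.Mockito.when
import org.scalatestplus.mockito.MockitoSugar
import play.api.libs.ws.{WSClient, WSRequest, WSResponse}

import scala.concurrent.Future

class UserServiceSpec extends org.scalatest.funsuite.AnyFunSuite with MockitoSugar {

  test("getUser calls the endpoint and handles a 200 response") {
    val ws = mock[WSClient]
    val request = mock[WSRequest]
    val response = mock[WSResponse]

    // Stub the fluent chain: ws.url(...).get() yields the canned response.
    when(ws.url(anyString)).thenReturn(request)
    when(request.get()).thenReturn(Future.successful(response))
    when(response.status).thenReturn(200)

    val service = new UserService(ws)
    // ...invoke service.getUser(UUID.randomUUID()) and assert on its result
  }
}

If stubbing the whole WSRequest chain gets tedious, the mockws library offers a fake WSClient backed by Play routes.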

Test void methods with Autowired classes

I am a beginner in Spring and I need to write a test for this class that verifies it calls the right methods:

class ClassOne {

    @Autowired
    AutowiredClass a1;

    @Autowired
    AutowiredClass a2;

    void methodOne() {
        a1.method1();    
    }

    void methodTwo() {
        a2.method2();
    }
}

I've tried to write a test, but it failed with an NPE:

class ClassOneTest {

    @Autowired
    ClassOneInterface c1i;

    @Test
    public void testMethod1() {
        c1i.methodOne();  // <- NPE appears here..
        Mockito.verify(ClassOne.class, Mockito.times(1));
    }
}

Help would be appreciated.

It would be great to successfully test these void methods.
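
A common pattern is to bypass the Spring context entirely: let Mockito create the collaborators and inject them into a real ClassOne, then verify the calls. A sketch, assuming JUnit 4 and that the fields are mockable (Mockito's @InjectMocks handles field injection):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ClassOneTest {

    @Mock
    AutowiredClass a1;

    @Mock
    AutowiredClass a2;

    @InjectMocks
    ClassOne classOne;  // real instance, with the mocks injected as fields

    @Test
    public void methodOneCallsA1() {
        classOne.methodOne();
        Mockito.verify(a1, Mockito.times(1)).method1();
    }

    @Test
    public void methodTwoCallsA2() {
        classOne.methodTwo();
        Mockito.verify(a2, Mockito.times(1)).method2();
    }
}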

UAT Website not working in Microsft Edge browser

I have a UAT website, which is hosted on the client's domain. It works fine on all other browsers, but when I try to open it in Microsoft Edge it gives me an error. I have already tried the steps below:

  1. I cleared all history, cache, cookies, etc.

  2. I also updated the allowed sites under Internet Options -> Security -> Trusted Sites and added the UAT website address there; still no luck.

The error is "Can't reach this page". Is it a network issue or a Microsoft Edge issue? If anyone knows the resolution, kindly let me know.

Edge browser Version - 44.17763

How to find the code coverage of an application for manual testing?

I want to find the code coverage of manual testing and automatic testing (appium) for an Android app (Android studio + gradle).

There are a few questions related to this problem on Stack Overflow already, but none of them works for me. I have followed the steps of this question: Android Coverage launch with JaCoCo. The jacoco.exec file that is being generated in sdcard/ is 37 bytes and is basically empty.

Current configuration of build.gradle:

apply plugin: 'com.android.application'

apply plugin: 'jacoco'
def coverageSourceDirs = [
        '../app/src/main/java'
]

jacoco{
    toolVersion = "0.7.4.201502262128"
}

task jacocoTestReport(type: JacocoReport) {
    group = "Reporting"
    description = "Generate Jacoco coverage reports after running tests."
    reports {
        xml.enabled = true
        html.enabled = true
    }
    classDirectories = fileTree(
            dir: './build/intermediates/classes/debug',
            excludes: ['**/R*.class',
                       '**/*$InjectAdapter.class',
                       '**/*$ModuleAdapter.class',
                       '**/*$ViewInjector*.class'
            ])
    sourceDirectories = files(coverageSourceDirs)
    executionData = files("$buildDir/outputs/code-coverage/connected/coverage.exec")
    doFirst {
        new File("$buildDir/intermediates/classes/").eachFileRecurse { file ->
            if (file.name.contains('$$')) {
                file.renameTo(file.path.replace('$$', '$'))
            }
        }
    }
}
// this is for the report
...
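
One commonly missed piece (a sketch; adjust to your build): coverage data from on-device runs is only recorded when the debug build type is instrumented, so an essentially empty .exec file often means this flag is off:

android {
    buildTypes {
        debug {
            // Instruments the debug APK so connected / manual test runs
            // actually write coverage execution data.
            testCoverageEnabled true
        }
    }
}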


jacoco.java:

package anubhav.calculatorapp;

import android.util.Log;

import java.lang.reflect.Method;

public class jacoco {
    static void generateCoverageReport() {
        String TAG = "jacoco";
        // use reflection to call emma dump coverage method, to avoid
        // always statically compiling against emma jar
        String coverageFilePath = "/sdcard/coverage.exec";
        java.io.File coverageFile = new java.io.File(coverageFilePath);
        try {
            Class<?> emmaRTClass = Class.forName("com.vladium.emma.rt.RT");
            Method dumpCoverageMethod = emmaRTClass.getMethod("dumpCoverageData",
                    coverageFile.getClass(), boolean.class, boolean.class);

            dumpCoverageMethod.invoke(null, coverageFile, false, false);
            Log.e(TAG, "generateCoverageReport: ok");
        } catch (Exception e) {
            // The Throwable was previously created but never thrown or
            // logged; attach the cause to the log entry instead.
            Log.e(TAG, "Is emma jar on classpath?", e);
        }
    }
}

Determine the SymStore hash for the pdb matching a dll

I'm currently setting up some build infrastructure. As part of this process, a .NET solution gets built and the symbols are stored on a SymbolSource server. I want to be able to test that Visual Studio will be able to successfully fetch the symbols from the server later. During a normal debugging session, the symbols are fetched from the server by looking up the SymStore hash, but I can't figure out how it finds that hash. I want to extract it from the dll so I can manually try to fetch symbols from the server and build that into my testing.

Example SymbolSource url: http://myserver.com/SymbolSource/Newtonsoft.Json.pdb/03B6CF6C0E2A41A4B7EDDA4A3856180B1/Newtonsoft.Json.pdb

SymStore hash: 03B6CF6C0E2A41A4B7EDDA4A3856180B1

I can't find this hash inside the dll (Newtonsoft.Json.dll), and it's not the SymStore hash of the dll itself as far as I can tell.

Sunday, June 23, 2019

Clicking on Login button after entering credentials not working during run-time, gives me incorrect cred error

I am sending the correct credentials to the username and password text boxes on the Android emulator, but when clicking the submit button it says the credentials are incorrect. If I use the same credentials manually, they are accepted. Is it because I am using sendKeys, or is there some session issue happening here?

driver.findElementByXPath("//android.widget.EditText[@text='Username']").sendKeys("chris");         
Thread.sleep(3000);
MobileElement passwordField = driver.findElementByXPath("//android.widget.EditText[@text='Password']");
passwordField.click();
//entering password
passwordField.sendKeys("abc!");
Thread.sleep(5000);
//clicking on submit
driver.findElementByAccessibilityId("LOGIN ").click();

Kotlin Testing Quickstart

What is the quickest/easiest/best way to add testing to a Kotlin project? I have more experience in the Python ecosystem where I can use pytest or unittest to find classes and/or functions starting or ending with test_ or _test. Is there an analogous project in the Kotlin ecosystem? I found documentation for kotlin.test but I wasn't sure how to get up and running with it. (Also, it's fair to assume that I don't know anything about Maven or Gradle.)
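
For a JVM Gradle project, the shortest path is arguably the kotlin.test API on top of JUnit: add one test dependency, put classes under src/test/kotlin, and mark functions with @Test (discovery is by annotation, not by a test_ naming convention as in pytest). A sketch, assuming the Kotlin Gradle plugin is already applied:

// build.gradle.kts
// dependencies { testImplementation(kotlin("test")) }

// src/test/kotlin/CalculatorTest.kt
import kotlin.test.Test
import kotlin.test.assertEquals

class CalculatorTest {
    @Test
    fun additionWorks() {
        assertEquals(4, 2 + 2)
    }
}

Running ./gradlew test executes everything it finds.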

Between top-down, bottom-up and divide-and-conquer what is the appropriate approach to debug sofware issues

I had difficulties running an application due to software issues. As I investigated different possibilities, I found that there were dependency packages required for the application to run.

Once the packages were properly installed, the application ran. I would like to know which approach I used to accomplish this:

Bottom-Up, Top-Down, or Divide-and-Conquer?

The constructor Kinosaal(int, Kino, int) is undefined

I have to build a cinema management system, but I have a problem with one constructor. The error message (The constructor Kinosaal(int, Kino, int) is undefined) only appears when I try to use it in the setUp method of a JUnit test.

public Kinosaal(int nummer, Kino kino, int kapazitaet) {
    if(kino.enthaeltSaalNummer(nummer)) {
        try {
            this.finalize();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }else {
        this.nummer = nummer;
        this.kino = kino;
        this.kapazitaet = kapazitaet;
        kino.addKinosaal(this);
    }
}


public static void main(String args[]){
    Kino k1 = new Kino("City");

    m.getUnternehmen().addKino(k1);

    Kinosaal kino1Saal1 = new Kinosaal(1, k1, 750);
}




package de.ostfalia.sitzplatzreservierung;

import static org.junit.Assert.fail;

import java.time.Duration;
import java.time.LocalDateTime;

import org.junit.Before;
import org.junit.Test;
import de.ostfalia.sitzplatzreservierung.Kino;
import de.ostfalia.sitzplatzreservierung.Kinosaal;
import de.ostfalia.sitzplatzreservierung.Filmvorfuehrung;
import de.ostfalia.sitzplatzreservierung.Film;

public class FilmvorfuehrungTest {

    @Before
    public void setUp() throws Exception {
        Film film = new Film("Avengers", Duration.ofMinutes(180));
        Kino k1 = new Kino("City");

        m.getUnternehmen().addKino(k1);

        // Add the cinema halls
        Kinosaal kino1Saal1 = new Kinosaal(1, k1, 750);
        Filmvorfuehrung film1 = new Filmvorfuehrung(film,
                LocalDateTime.of(2019, 6, 22, 20, 00), kino1Saal1, false);
    }
}
}