Tuesday, April 30, 2019

Possible to use cypress for e2e with embedded jxbrowser?

I've recently found and fallen in love with Cypress for e2e testing, much more so than Selenium, but there's a catch: our web app will actually run inside JxBrowser, embedded in a Java app. Whatever e2e framework we choose has to be able to run all the tests within JxBrowser inside this embedded application (hence why we originally looked at Selenium, because of its remote driver).

Is it possible to get this working? We absolutely need the embedded JxBrowser tests (COTS integration, we can't get around it), and I'd hate to have to fall back to Selenium.

Test a simple express app: Uncaught ReferenceError: done is not defined

I tried to test a simple express app by following this tutorial.

After running mocha test.js, the error I get is: done is not defined (see below).


Full code here

var express = require('express'); // (npm install --save express)
var request = require('supertest');
var app;

function createApp() {
  app = express();

  var router = express.Router();
  router.route('/').get(function(req, res) {
    return res.json({ goodCall: true });
  });

  app.use(router);

  return app;
}

describe('Our server', function() {
  // Called once before any of the tests in this block begin.
  before(function(done) {
    app = createApp();
    app.listen(function(err) {
      if (err) {
        return done(err);
      }
      done();
    });
  });

  it('should send back a JSON object with goodCall set to true', function() {
    request(app)
      .get('/index')
      .set('Content-Type', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200, function(err, res) {
        if (err) {
          return done(err);
        }
        const callStatus = res.body.goodCall;
        expect(callStatus).to.equal(true);
        // Done
        done();
      });
  });
});
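
For reference, a minimal sketch of the likely fix (assuming Mocha and chai's expect, as in the code above): Mocha only injects done when the test callback declares it as a parameter, so the it() callback has to accept it. Note also that the router above only defines '/', not '/index'.

it('should send back a JSON object with goodCall set to true', function(done) {
  // Declaring `done` here is what makes Mocha treat the test as asynchronous.
  request(app)
    .get('/')                       // the router above only defines '/', not '/index'
    .set('Content-Type', 'application/json')
    .expect('Content-Type', /json/)
    .expect(200, function(err, res) {
      if (err) {
        return done(err);
      }
      expect(res.body.goodCall).to.equal(true);
      done();
    });
});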

How to test onRefresh and OnNavigationStateChange in react native app

I have a WebView component in a react-native app that I am trying to unit test. I am having difficulty figuring out how to test class methods like onRefresh and onNavigationStateChange.

This is a react-native app using jest/enzyme for testing.


class InlineWebView extends CorePureComponent<Props, State> {
    webView: React.RefObject<WebView> = React.createRef<WebView>();
    state: Readonly<State> = {
        canGoForward: false,
        canGoBack: false,
    };


    onBack = (): void => {
        const { current } = this.webView;
        if (current) {
            current.goBack();
        }
    }

    onForward = (): void => {
        const { current } = this.webView;
        if (current) {
            current.goForward();
        }
    }

    onRefresh = (): void => {
        const { current } = this.webView;
        if (current) {
            current.reload();
        }
    }

    onNavigationStateChange = (navState: NavState): void => {
        this.props.onNavigationStateChange(navState);

        const { canGoForward, canGoBack } = navState;
        this.setState({ canGoForward, canGoBack });
    }

    render(): JSX.Element {
        const { theme, globalThemeVars, url, showControls, onShouldStartLoadWithRequest } = this.props;
        const { canGoBack, canGoForward } = this.state;
        const { container, viewContainer } = createStyles(theme, globalThemeVars);

        const navBar = (showControls) ? (
            <WebNavBar>
                <Button icon="CHEVRON_BACK_THIN" onPress={this.onBack} isDisabled={!canGoBack} />
                <Button icon="CHEVRON_FORWARD_THIN" onPress={this.onForward} isDisabled={!canGoForward} />
                <Button icon="REFRESH" onPress={this.onRefresh} />
            </WebNavBar>
        ) : null;

        return (
            <View style={container}>
                <View style={viewContainer}>
                    <WebView
                        ref={this.webView}
                        source={{ uri: url }}
                        onShouldStartLoadWithRequest={onShouldStartLoadWithRequest}
                        onNavigationStateChange={this.onNavigationStateChange}
                    />
                </View>
                {navBar}
            </View>
        );
    }
}

export default withCore<Props>(InlineWebView, defaultTheme, {
    showControls: true,
    onShouldStartLoadWithRequest: (): boolean => true,
    onNavigationStateChange: (): void => { /* noop */ },
    theme: {},
});

and here are some tests:

describe("WebView", () => {
    let wrapper;

    beforeEach(() => {
        wrapper = shallow(
            <WebView
                url="www.example.com"
                showControls={true}
                theme=
                globalThemeVars=
                onShouldStartLoadWithRequest={mockNavChange}
                onNavigationStateChange={mockLoadRequest}
            />,
        );
    });

    test("renders correctly", () => {
        const tree = renderer.create(<WebView />).toJSON();
        // console.log("TREE", tree.children[0].children[0].children)
        expect(tree).toMatchSnapshot();
    });

    it("passes props", () => {
        expect(wrapper.props().style).toEqual(100);
        expect(wrapper.props().children[0].props.style).toEqual(200);
    });

    it("calls onNavigationStateChange", () => {
        wrapper.setState({
            canGoForward: false,
            canGoBack: false,
        })
        wrapper.instance().onNavigationStateChange();
        console.log('NAV STATE CHANGE', wrapper.props().children[0].props.children.props.onNavigationStateChange);
    })
});

I am getting an error on this last one saying: TypeError: Cannot destructure property 'canGoForward' of 'undefined' or 'null'.

Any advice on how to test onRefresh and onNavigationStateChange?
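
A minimal sketch of one way to exercise these handlers (assuming wrapper.instance() reaches the InlineWebView instance; with the withCore HOC this may require wrapper.dive().instance() instead). The destructuring error above comes from calling onNavigationStateChange() with no argument, so a fake navState object is passed in, and the ref is stubbed so reload() can be observed:

it("updates state on a navigation state change", () => {
    const instance = wrapper.instance(); // or wrapper.dive().instance() through the HOC
    // Pass a fake navState so the destructuring of canGoForward/canGoBack succeeds.
    instance.onNavigationStateChange({ canGoForward: true, canGoBack: true });
    expect(instance.state.canGoBack).toBe(true);
});

it("reloads the WebView on refresh", () => {
    const instance = wrapper.instance();
    const reload = jest.fn();
    instance.webView = { current: { reload } }; // stub the ref so reload() is observable
    instance.onRefresh();
    expect(reload).toHaveBeenCalled();
});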

What is the easiest way to test error handling when writing to a file in Perl?

I have a bog standard Perl file writing code with (hopefully) adequate error handling, of the type:

open(my $fh, ">", "$filename") or die "Could not open file $filename for writing: $!\n";
# Some code to get data to write
print $fh $data  or die "Could not write to file $filename: $!\n";
close $fh  or die "Could not close file $filename after writing: $!\n";

(I just wrote this code from memory, pardon any typos or bugs)

It is somewhat easy to test error handling in the first "die" line (for example, create a non-writable file with the same name you plan to write to).

How can I test error handling in the second (print) and third (close) "die" lines?

The only way I know of to induce an error when closing is to run out of space on the filesystem while writing, which is NOT easy to do as a test.

I would prefer integration test type solutions rather than unit test type (which would involve mocking IO methods in Perl).
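
A minimal sketch of one integration-style approach, assuming a Linux host: /dev/full accepts opens but fails every write with ENOSPC, which exercises the print and close error paths without actually filling a filesystem. Because of stdio buffering, the failure may only surface at close, so both checks matter.

use strict;
use warnings;

my $filename = "/dev/full";   # every write to this device fails with "No space left on device"
open(my $fh, ">", $filename) or die "Could not open file $filename for writing: $!\n";
my $data = "x" x (1024 * 1024);
# With default buffering the error may only surface at close, so check both.
print $fh $data or die "Could not write to file $filename: $!\n";
close $fh       or die "Could not close file $filename after writing: $!\n";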

Node in Docker: npm test and exit

I want to test my Node Docker image with "npm run test" as a command override when running my container.

My Dockerfile is:

FROM node:alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "run", "start"]

The "npm run test" command should run in my container and then exit back to the terminal (locally and on Travis CI), but the test run gets stuck at "Ran all test suites." waiting for input.

My docker run command is:

docker run myimage npm run test -- --coverage

I also tried with:

docker run myimage npm run test -- --forceExit

But neither of them exits when the tests have run (either locally or on Travis CI).

My App.test.js file is the standard test:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<App />, div);
  ReactDOM.unmountComponentAtNode(div);
});

What should I do to automatically exit the test when it is finished?
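
A hedged sketch of one common fix, assuming the tests are Jest tests run through react-scripts (as the App.test.js above suggests): Jest stays in interactive watch mode unless the CI environment variable is set, so passing CI=true into the container makes the run exit on its own.

docker run -e CI=true myimage npm run test -- --coverage

The same effect can be baked into the image with ENV CI=true in the Dockerfile, or into a dedicated test script in package.json.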

Possible Test Scenarios for a Kafka Consumer that Pushes Records to the S3 After Processing

I have a Kafka consumer that:

  1. Consumes records from Kafka.
  2. Processes each one of them in parallel by calling three downstream services.
  3. Pushes a final processed document (corresponding to each record) to S3.

Some additional info:

  1. I am using commitAsync(..);

  2. I am using Spring Reactor.

Apart from the happy path, what are the possible scenarios that I should cover, considering that I am processing X messages per poll(..) and processing and committing them all in parallel? I want my entire program to be tested as harshly as possible.

pytest test parameterization override

I am currently parametrizing all of my testcases using pytest_generate_tests and this works well.

What I'd like to do now is override this behavior for a specific test. If I try to use the pytest.mark.parametrize decorator on the test itself, I get a ValueError: duplicate error, which is understandable, as I'm now trying to parametrize the test in two places.

Is there a way I can override the "default" parameterization for this one test case?

I can achieve this by doing something like the below, but it's a very hacky way to do it:

def pytest_generate_tests(metafunc):
    fixture_modes = ['mode1', 'mode2']
    if 'fixture' in metafunc.fixturenames:
        fixture  = metafunc.config.getoption('fixture')
        if fixture:
            fixture_modes = [fixture]
        if metafunc.function.__name__ != 'func_to_skip':
            metafunc.parametrize('fixture_mode', fixture_modes, indirect=True)

Is there a better way to do this?
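
A hedged sketch of one alternative, assuming pytest exposes the collected test definition on metafunc (metafunc.definition): skip the default parametrization whenever the test already carries its own pytest.mark.parametrize for the same fixture, so a per-test override no longer raises the duplicate ValueError. The fixture name 'fixture_mode' is reused from the snippet above.

def pytest_generate_tests(metafunc):
    fixture_modes = ['mode1', 'mode2']
    if 'fixture_mode' in metafunc.fixturenames:
        # If the test function declares its own parametrize for this fixture,
        # let that marker win and skip the default parametrization.
        own_mark = metafunc.definition.get_closest_marker('parametrize')
        if own_mark and 'fixture_mode' in own_mark.args[0]:
            return
        metafunc.parametrize('fixture_mode', fixture_modes, indirect=True)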

Linux/Windows performance comparison of MATLAB against other library using containers

I need to compare the performance of MATLAB and an open-source alternative (e.g. numpy/scipy) on particular matrix problems, on both Windows and Linux. It has been expressly asked that the comparison be executed strictly on the same hardware platform, for obvious comparison reasons.

My question is, would using containers (Windows and Linux distro images) satisfy this requirement?

I believe setting up two VMs would satisfy the requirement, but using containers would be much less of a hassle and make the tests easily reproducible on any machine; however, I'm not too familiar with their architecture or how they access their host's hardware.

Thanks in advance for any help.

How to make a test on a regex

I'm new to Django and I have just started writing tests for my web application. Firstly, I'm not sure whether it's necessary to test regex model fields (like the one in my code); secondly, if the testing is necessary, how can I do it?

I've already tried this solution: Unit Tests pass against regex validator of models in Django, but it's not working. The cf field needs a 16-character string, but my test_cf_max_length() function still returns the seller object even when the cf entered is wrong (<16 chars).

models.py

class Seller(User):
    cf = models.CharField(validators=[RegexValidator(regex='^.{16}$', message='Social Security Number', code='nomatch')], max_length=16)
    iban = models.CharField(validators=[RegexValidator(regex='^.{27}$', message='IBAN', code='nomatch')], max_length=27)
    is_seller = models.BooleanField(default=False)

tests.py

def setUpTestData(cls):
    Seller.objects.create(username='clara', cf='12345690123456', iban='123456789012345678901234567')


def test_cf_max_length(self):
    seller = Seller.objects.get(id=1)
    with self.assertRaises(ValidationError):
        if seller.full_clean():
            seller.save()
    self.assertEqual(Seller.objects.filter(id=1).count(), 0)
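
A minimal sketch of one way to assert the validator fires, assuming the goal is simply "an invalid cf fails full_clean()" and reusing the ValidationError and Seller imports from the test module: build the instance in memory and let assertRaises catch the ValidationError before anything is saved.

def test_cf_regex_rejects_short_value(self):
    # 14 characters, so the '^.{16}$' validator should reject it.
    seller = Seller(username='clara2', cf='12345690123456',
                    iban='123456789012345678901234567')
    with self.assertRaises(ValidationError):
        seller.full_clean()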

Is there any tool to count the number of lines of test code in Rust?

I am aware of Tokei and loc, both of which count code lines and comment lines, but I'd really like to know the number of lines of test code. I'm primarily interested in this for Rust, but support for other languages with standard test tools, such as Elixir and Haskell, would also be cool.

How to test golang code on concourse not using copy command?

In Concourse I used the topflighttech/go-testing Docker container to test my-go-api. I find that the simplest solution is to just copy the src code to /go/src/my-go-api and test it. But I am wondering how we could test on Concourse directly without copying to /go/src/my-go-api, so the image could stay smaller.

Of course mv does not work. Here is the output from Concourse.

+ mv my-go-api /go/src
mv: can't remove 'my-go-api': Resource busy
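
A hedged sketch of two workarounds. The mv fails because the task input is a mounted volume; a symlink into GOPATH is the usual substitute, and if the image's Go version supports modules (Go 1.11+) the code can be tested in place without touching /go/src at all.

# Symlink the input into GOPATH instead of moving it:
ln -s "$PWD/my-go-api" /go/src/my-go-api
cd /go/src/my-go-api && go test ./...

# Or, module-aware, test straight from the input directory:
cd my-go-api && GO111MODULE=on go test ./...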

Is There A Tool To Measure Effectiveness of Django Tests [on hold]

I am trying to come up with a metric to show how effective our Django tests are. We have a Django app with a moderately large API surface area and a suite of tests, some of which are skipped. We are aware that there are gaps in what is tested.

The Django app uses the django-rest-framework and Django filter libraries. There are class-based views based on DRF generic views. This means that the code is highly declarative in nature. Some of those declarations have complicated filters.

We want a metric to show how many of the API URLs are visited when ./manage.py test is called. Ideally it would also show what filters have been applied in those views, to give some idea of what is not exercised in tests at all.

The usual Python tool, coverage, does not give a good metric for this, as it only shows lines executed, and the declarative code is always executed when the declaration is read, even though the filters/views/columns it declares may not be used at all.

I have been trying to find a tool that gives API or feature visiting metrics for Django and have turned up a blank. I would be surprised if this is a completely new concept given the rich Django ecosystem. What tools are there in the ecosystem that would do this?

How to pass different data to multiple instances of the same test with Robo

What am I trying to achieve?

I made a test which logs into a web application I have, completes an exam and submits it at the end.

I need to test how many users the server can handle so my intention is to run the same test multiple times, but I need to be able to pass each instance a different user:password.

Right now I have an array with the credentials inside the test itself, and I made it so it takes a random item from the array. This works, but obviously sometimes it takes the same credentials twice, so this isn't a reliable solution.

This is my RoboFile:

public function parallelSplitTests()
{
    // Split your tests by files
    $this->taskSplitTestFilesByGroups(5)
        ->projectRoot('.')
        ->testsFrom('tests/acceptance')
        ->groupsTo('tests/_data/paracept_')
        ->run();
}

public function parallelRun()
{
    $parallel = $this->taskParallelExec();

    for ($i = 1; $i <= 1; $i++) {
        $parallel->process(
            $this->taskCodecept()                  // use built-in Codecept task
                ->suite('acceptance')              // run acceptance tests
                ->group("paracept_$i")             // for all paracept_* groups
                ->xml("tests/_log/result_$i.xml")  // save XML results
        );
    }

    return $parallel->run();
}

Thanks in advance.

How to use Python as the directory script on Minecraft Education (a JavaScript-based program)?

I'm working with the Minecraft Education program, which is a JavaScript-based environment (it also has code blocks), and I would like to know what method to use in order to import Python as the directory script, executing the same functions and features it provides with Python instead of JavaScript.

Jenkins build unable to initiate cucumber TESTS and times out

I'm new to Jenkins. My basic Jenkins configuration worked nicely earlier, but since I changed the report configuration, the TESTS aren't initiating anymore. I installed the 'Email Extension' and 'Email Extension Template' plugins, which seemed to start causing the problem (explained below); I have now removed them, but the problem persists.

I see the spinning wheel under the last line (below) and nothing happens.

[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running cucumber.CucumberRunner
Starting ChromeDriver 2.41.578737 (49da6702b16031c40d63e5618de03a32ff6c197e) on port 8791
Only local connections are allowed.

After sometime (18-20mins) the build just times out with a 'Failed to instantiate class stepDefinitions.LoginSUT'.

[INFO] 
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running cucumber.CucumberRunner
Starting ChromeDriver 2.41.578737 (49da6702b16031c40d63e5618de03a32ff6c197e) on port 8791
Only local connections are allowed.
[1556614819.067][SEVERE]: Timed out receiving message from renderer: 600.000
Starting ChromeDriver 2.41.578737 (49da6702b16031c40d63e5618de03a32ff6c197e) on port 8470
Only local connections are allowed.
[1556615421.440][SEVERE]: Timed out receiving message from renderer: 600.000
Starting ChromeDriver 2.41.578737 (49da6702b16031c40d63e5618de03a32ff6c197e) on port 10001
Only local connections are allowed.
[1556616023.903][SEVERE]: Timed out receiving message from renderer: 600.000
7 Scenarios (7 failed)
28 Steps (7 failed, 21 skipped)
30m7.588s

Here is the snapshot from the Jenkins test results. Any suggestions would be greatly appreciated.

How to wait for bootstrap modal animation to finish in Vue jest test

Is it possible to wait for a modal animation to finish in a Vue Jest test? My problem is that when I click on the button (button.add_modal), div.my-modal should get the class "show", but after triggering the button it seems the class isn't attached in my HTML (in the Jest test); in dev tools it works fine.

    wrapper.find('button.add_modal').trigger('click');
    //This should return true, but I'm getting false.
    expect(wrapper.contains('div#my-modal.show')).toBe(true);
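
A hedged sketch of one way to wait it out, assuming a standard Bootstrap fade transition (roughly 300 ms) toggles the show class: make the test async, flush Vue's DOM update, and give the animation time to finish before asserting.

it('shows the modal after clicking the add button', async () => {
    wrapper.find('button.add_modal').trigger('click');
    await wrapper.vm.$nextTick();                            // let Vue apply the DOM update
    await new Promise(resolve => setTimeout(resolve, 500));  // let the fade animation finish
    expect(wrapper.contains('div#my-modal.show')).toBe(true);
});

Another option worth considering is disabling the animation in tests (e.g. rendering the modal without the fade class), which removes the timing dependency entirely.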

Testing if user is not login, redirect to login page

I am just testing whether my application will redirect to the login page if the user is not logged in. The URL is /confirmation/TEST01, where TEST01 comes from a reverse URL lookup. It should redirect to /accounts/login/?next=/confirmation/TEST01.

But I am getting an error:

AssertionError: 404 != 302 : Response didn't redirect as expected: Response code was 404 (expected 302)

Test.py

def test_redirect_to_login(self):
    response = self.client.get('/confirmation/TEST01/')
    self.assertRedirects(response, '/accounts/login/?next=/confirmation/TEST01', status_code=302, target_status_code=404, fetch_redirect_response=True)

Terminal

[30/Apr/2019 08:45:01] "GET /confirmation/XZC456 HTTP/1.1" 302 0
Not Found: /accounts/login/
[30/Apr/2019 08:45:01] "GET /accounts/login/?next=/confirmation/XZC456 HTTP/1.1" 404 2386

"Can't open perl script "t/spec/fudgeall": No existe el archivo o el directorio"

I am trying to run some tests for Rakudo following instructions in the README.md, that is, with perl Configure.pl and make. However, when I run

make t/02-rakudo/09-thread-id-after-await.t 

after that, it writes:

/home/jmerelo/perl5/perlbrew/perls/perl-5.20.0/bin/perl tools/build/check-nqp-version.pl /home/jmerelo/Code/forks/perl6/rakudo/install/bin/nqp-m
rm -f -- perl6
cp -- perl6-m perl6
chmod -- 755 perl6
/home/jmerelo/perl5/perlbrew/perls/perl-5.20.0/bin/perl t/harness5 --fudge --moar --keep-exit-code --verbosity=1 t/02-rakudo/09-thread-id-after-await.t
Can't open perl script "t/spec/fudgeall": No existe el archivo o el directorio
Files=0, Tests=0,  0 wallclock secs ( 0.01 usr +  0.00 sys =  0.01 CPU)
Result: NOTESTS

That directory does not even exist. Any idea?

Can we automatically generate test cases for a desktop application, which in this case is JEdit? If yes, please let me know how

The problem is all about automatically generating test cases for the JEdit application, which is written in Java. For more information about JEdit, please click here!

I tried to use Randoop, which can generate unit test cases for a class file or multiple class files, and failed miserably.

  1. Find software that could automatically generate test cases for desktop applications built on Java.
  2. Generate at least 100 test cases from the application using the test case generation software.

Can anyone suggest an approach for IoT testing?

This question is related to testing and asks for tips about IoT testing.

Monday, April 29, 2019

Test automation for integration testing

How would you prepare integration tests between two systems if you cannot prepare test data, in order to confirm that an item already added on system A (to which I have no access to make a POST or SQL insert) is visible after a GET request on system B?

Is there a standardised way to perform testing for Bot Builder in C#?

What are good ways to perform testing for Bot Framework v4 in C#? What would be a systematic way to test the different branches and flows in a sequence of dialogs, and to check whether the postback data from an Adaptive Card form generates the correct response or attachment? Thanks!

Has anyone experimented with tweaking the Randoop code to automate the generation of unit test cases?

I am thinking of tweaking/modifying the Randoop code to fulfil my requirements. Has anyone experimented with that before? If yes, what challenges did you face? If not, any suggestions?

Why to group related tests

What are the use cases for grouping tests using TestNG groups or JUnit categories?

I usually group my tests by function using JUnit categories: unit tests, integration tests, etc. But on my current team, we just had a conversation and the team decided we want to run all the tests all the time, because they don't see any benefit in grouping tests.

So I'm curious to know if people here group their tests and why.

go test fails on import of package in module under test

What is the correct way to import a package in the same module? When I use a fully qualified module path, the build works correctly but testing fails due to the import:

$ tree
.
├── app
│   └── main.go
├── go.mod
└── lib
    └── something.go

2 directories, 3 files

$ cat go.mod
module github.com/myname/mypackage

go 1.12

$ cat app/main.go 
package main

import (
    "fmt"
    "github.com/myname/mypackage/lib"
)

func main() {
    fmt.Println(lib.Var1)
}

$ cat lib/something.go 
package lib

var Var1 = 42

$ go build app/main.go 
$ ./main
42

$ go test
can't load package: package github.com/myname/mypackage: unknown import path "github.com/myname/mypackage": cannot find module providing package github.com/myname/mypackage

The module is not in the GOPATH.
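
One plausible cause, sketched below: a bare go test only operates on the package in the current directory, and the module root here contains no .go files, so the go tool tries (and fails) to resolve the module path itself as an importable package. Naming the packages explicitly, or using the ./... pattern from the module root, runs the tests.

$ go test ./...          # test every package in the module
$ go test ./app ./lib    # or name the package directories explicitly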

How to get Author Attribute when it's set for TestFixture level instead of Test level?

I want to set an Author on a TestFixture and read it at runtime so I can store it in my DB.

Here is the TestFixture class where I set the NUnit Author attribute at the TestFixture level:

[TestFixture(Author = nameof(AuthorName.Eva)),MachineCategory.Filter]
public class FilterTests : WidiaTestSetup
{  

        [Test, RetryOnFailureOrException]
        public void FilterCuttingDiameter()
        { ...}
} 

To get the test Author at the test level I do:

Author = TestContext.CurrentContext.Test.Properties.Get("Author").ToString()

But it is not working from the TestFixture. How can I get it at the TestFixture level?

Thanks in advance.

io.Writer /dev/null like object in golang?

I provide an io.Writer to my server to direct logs (e.g. to a file or Stderr).

However, in tests I would like to skip those logs. I wonder, is there an io.Writer object that acts like /dev/null?

(Yes, of course I can try to open /dev/null, but for tests, and in general, I thought it might be more elegant to have it as an object.)
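
A minimal sketch: the standard library already ships a no-op writer, ioutil.Discard (also available as io.Discard since Go 1.16), which satisfies io.Writer and silently drops everything written to it.

package main

import (
	"io/ioutil"
	"log"
)

func main() {
	// All output sent through this logger is discarded, no /dev/null needed.
	logger := log.New(ioutil.Discard, "", 0)
	logger.Println("this goes nowhere")
}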

How to unit Test a Flutter TextFormField maxlines

Is it possible to write a unit test that verifies that the maxLines property of a TextFormField is set correctly? I cannot find a way to access the property.

I create a TextFormField:

final field = TextFormField(
    initialValue: "hello",
    key: Key('textformfield'),
    maxLines: 2,
  );

Then in the test I get access to the form field with tester.widget:

 final formfield =
    await tester.widget<TextFormField>(find.byKey(Key('textformfield')));

But since the maxLines property is passed to the builder, which returns a TextField, how can I get access to that TextField?

Or is there a completely different way to verify this?
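
A hedged sketch of one approach, to be placed inside the existing testWidgets body: since TextFormField builds an inner TextField, the test can look that descendant up and read its maxLines directly.

final textField = tester.widget<TextField>(
  find.descendant(
    of: find.byKey(Key('textformfield')),
    matching: find.byType(TextField),
  ),
);
expect(textField.maxLines, 2);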

Testing an Xtext language with OSGi and Maven?

I am developing an Xtext language (named "thezenfra" in the stack below). Within a class in this language I use a bundle in the form of a JAR (named "thepackme" in the stack below). I added it to the MANIFEST.MF using "Bundle-ClassPath:". I am using Maven to maintain my project, and the tycho-surefire plugin to execute the tests. When Maven enters the testing phase I get the errors below. It seems that it does not find the included JAR. Do you see where my problem comes from?

[INFO] --- tycho-surefire-plugin:1.0.0:test (default-test) @ org.atlanmod.thezenfra.tests ---
[INFO] Expected eclipse log file: /home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/work/data/.metadata/.log
[INFO] Command line:
    [/home/imad/dev/tools/jdk/jdk1.8.0_151/jre/bin/java, -Dosgi.noShutdown=false, -Dosgi.os=linux, -Dosgi.ws=gtk, -Dosgi.arch=x86_64, -javaagent:/home/imad/.m2/repository/org/jacoco/org.jacoco.agent/0.8.1/org.jacoco.agent-0.8.1-runtime.jar=destfile=/home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/jacoco.exec, -Dosgi.clean=true, -jar, /home/imad/.m2/repository/p2/osgi/bundle/org.eclipse.equinox.launcher/1.4.0.v20161219-1356/org.eclipse.equinox.launcher-1.4.0.v20161219-1356.jar, -data, /home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/work/data, -install, /home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/work, -configuration, /home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/work/configuration, -application, org.eclipse.tycho.surefire.osgibooter.headlesstest, -testproperties, /home/imad/dev/main/eclipse/phd/ThezenFRA/org.atlanmod.thezenfra.tests/target/surefire.properties]

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.atlanmod.thezenfra.tests.ThezenfraParsingTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.022 sec <<< FAILURE! - in org.atlanmod.thezenfra.tests.ThezenfraParsingTest
org.atlanmod.thezenfra.tests.ThezenfraParsingTest  Time elapsed: 5.004 sec  <<< ERROR!
com.google.inject.CreationException: Guice creation errors:

1) Error injecting constructor, java.lang.NoClassDefFoundError: Could not initialize class org.atlanmod.thezenfra.thezenFRA.ThezenFRAPackage
  at org.atlanmod.thezenfra.scoping.ThezenfraScopeProvider.<init>(Unknown Source)
  while locating org.atlanmod.thezenfra.scoping.ThezenfraScopeProvider
  while locating org.eclipse.xtext.xbase.scoping.batch.IBatchScopeProvider
    for field at org.eclipse.xtext.xbase.typesystem.internal.AbstractBatchTypeResolver.scopeProvider(Unknown Source)
  while locating org.eclipse.xtext.xbase.typesystem.internal.CachingBatchTypeResolver
  while locating org.eclipse.xtext.xbase.typesystem.IBatchTypeResolver
    for field at org.eclipse.xtext.xbase.compiler.ElementIssueProvider$Factory.typeResolver(Unknown Source)
  while locating org.eclipse.xtext.xbase.compiler.ElementIssueProvider$Factory
  while locating org.eclipse.xtext.xbase.compiler.IElementIssueProvider$Factory
    for field at org.eclipse.xtext.xbase.compiler.ErrorSafeExtensions.issueProviderFactory(Unknown Source)
  while locating org.eclipse.xtext.xbase.compiler.ErrorSafeExtensions
    for field at org.eclipse.xtext.xbase.compiler.JvmModelGenerator._errorSafeExtensions(Unknown Source)
  while locating org.eclipse.xtext.xbase.compiler.JvmModelGenerator
    for field at org.eclipse.xtext.xbase.jvmmodel.JvmModelCompleter.generator(Unknown Source)
  while locating org.eclipse.xtext.xbase.jvmmodel.JvmModelCompleter
    for field at org.eclipse.xtext.xbase.jvmmodel.JvmModelAssociator.completer(Unknown Source)
  at org.eclipse.xtext.xbase.jvmmodel.JvmModelAssociator.class(Unknown Source)
  while locating org.eclipse.xtext.xbase.jvmmodel.JvmModelAssociator
  while locating org.eclipse.xtext.xbase.jvmmodel.ILogicalContainerProvider
    for field at org.eclipse.xtext.xbase.validation.XbaseValidator.logicalContainerProvider(Unknown Source)
  at org.eclipse.xtext.service.MethodBasedModule.configure(MethodBasedModule.java:57)
  while locating org.atlanmod.thezenfra.validation.ThezenfraValidator
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.atlanmod.thezenfra.thezenFRA.ThezenFRAPackage
    at org.atlanmod.thezenfra.scoping.ThezenfraScopeProvider.<init>(ThezenfraScopeProvider.java:32)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:85)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.InjectorImpl$3.get(InjectorImpl.java:737)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.InjectorImpl$3.get(InjectorImpl.java:737)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.Scopes$1$1.get(Scopes.java:65)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
    at com.google.inject.internal.InjectorImpl$3.get(InjectorImpl.java:737)
    at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.Scopes$1$1.get(Scopes.java:65)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
    at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:204)
    at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:198)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
    at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:198)
    at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:179)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
    at com.google.inject.Guice.createInjector(Guice.java:95)
    at com.google.inject.Guice.createInjector(Guice.java:72)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider$1.createInjector(ThezenfraInjectorProvider.java:39)
    at org.atlanmod.thezenfra.ThezenfraStandaloneSetupGenerated.createInjectorAndDoEMFRegistration(ThezenfraStandaloneSetupGenerated.java:23)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.internalCreateInjector(ThezenfraInjectorProvider.java:41)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.getInjector(ThezenfraInjectorProvider.java:29)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.setupRegistry(ThezenfraInjectorProvider.java:63)
    at org.eclipse.xtext.testing.XtextRunner.methodBlock(XtextRunner.java:43)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:156)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:82)
    at org.eclipse.tycho.surefire.osgibooter.OsgiSurefireBooter.run(OsgiSurefireBooter.java:95)
    at org.eclipse.tycho.surefire.osgibooter.HeadlessTestApplication.run(HeadlessTestApplication.java:21)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.internal.app.EclipseAppContainer.callMethodWithException(EclipseAppContainer.java:587)
    at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:198)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:653)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:590)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1499)
    at org.eclipse.equinox.launcher.Main.main(Main.java:1472)


2) Error injecting method, java.lang.NoClassDefFoundError: org/thepackme/xtext/thepackME/ThepackMEPackage
  at org.eclipse.xtext.validation.AbstractInjectableValidator.register(Unknown Source)
  at org.eclipse.xtext.service.MethodBasedModule.configure(MethodBasedModule.java:57)
  while locating org.atlanmod.thezenfra.validation.ThezenfraValidator
Caused by: java.lang.NoClassDefFoundError: org/thepackme/xtext/thepackME/ThepackMEPackage
    at org.atlanmod.thezenfra.thezenFRA.impl.ThezenFRAPackageImpl.init(ThezenFRAPackageImpl.java:373)
    at org.atlanmod.thezenfra.thezenFRA.ThezenFRAPackage.<clinit>(ThezenFRAPackage.java:59)
    at org.atlanmod.thezenfra.validation.AbstractThezenfraValidator.getEPackages(AbstractThezenfraValidator.java:16)
    at org.eclipse.xtext.validation.AbstractInjectableValidator.register(AbstractInjectableValidator.java:46)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:71)
    at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)
    at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.Scopes$1$1.get(Scopes.java:65)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
    at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:204)
    at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:198)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
    at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:198)
    at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:179)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
    at com.google.inject.Guice.createInjector(Guice.java:95)
    at com.google.inject.Guice.createInjector(Guice.java:72)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider$1.createInjector(ThezenfraInjectorProvider.java:39)
    at org.atlanmod.thezenfra.ThezenfraStandaloneSetupGenerated.createInjectorAndDoEMFRegistration(ThezenfraStandaloneSetupGenerated.java:23)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.internalCreateInjector(ThezenfraInjectorProvider.java:41)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.getInjector(ThezenfraInjectorProvider.java:29)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.setupRegistry(ThezenfraInjectorProvider.java:63)
    at org.eclipse.xtext.testing.XtextRunner.methodBlock(XtextRunner.java:43)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:156)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:82)
    at org.eclipse.tycho.surefire.osgibooter.OsgiSurefireBooter.run(OsgiSurefireBooter.java:95)
    at org.eclipse.tycho.surefire.osgibooter.HeadlessTestApplication.run(HeadlessTestApplication.java:21)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.internal.app.EclipseAppContainer.callMethodWithException(EclipseAppContainer.java:587)
    at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:198)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:653)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:590)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1499)
    at org.eclipse.equinox.launcher.Main.main(Main.java:1472)
Caused by: java.lang.ClassNotFoundException: org.thepackme.xtext.thepackME.ThepackMEPackage cannot be found by org.atlanmod.thezenfra_1.0.0.201904292021
    at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:484)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:395)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:387)
    at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:150)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 71 more

16 errors
    at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:435)
    at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:183)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
    at com.google.inject.Guice.createInjector(Guice.java:95)
    at com.google.inject.Guice.createInjector(Guice.java:72)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider$1.createInjector(ThezenfraInjectorProvider.java:39)
    at org.atlanmod.thezenfra.ThezenfraStandaloneSetupGenerated.createInjectorAndDoEMFRegistration(ThezenfraStandaloneSetupGenerated.java:23)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.internalCreateInjector(ThezenfraInjectorProvider.java:41)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.getInjector(ThezenfraInjectorProvider.java:29)
    at org.atlanmod.thezenfra.tests.ThezenfraInjectorProvider.setupRegistry(ThezenfraInjectorProvider.java:63)
    at org.eclipse.xtext.testing.XtextRunner.methodBlock(XtextRunner.java:43)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:156)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:82)
    at org.eclipse.tycho.surefire.osgibooter.OsgiSurefireBooter.run(OsgiSurefireBooter.java:95)
    at org.eclipse.tycho.surefire.osgibooter.HeadlessTestApplication.run(HeadlessTestApplication.java:21)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.internal.app.EclipseAppContainer.callMethodWithException(EclipseAppContainer.java:587)
    at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:198)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:653)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:590)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1499)
    at org.eclipse.equinox.launcher.Main.main(Main.java:1472)


Results :

Tests in error: 
  ThezenfraParsingTest.org.atlanmod.thezenfra.tests.ThezenfraParsingTest » Creation Gu...

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

System Test & Verification

1. Testing Maturity Levels. Select from among the terms (x1, x2, x3, x4, x5) to fill in the correct term next to the testing level. For example, Level 3 is x5.

Testing levels:
a. Level 0 ______
b. Level 1 ______
c. Level 2 ______
d. Level 3 __x5__
e. Level 4 ______
f. Level 5 ______

Terms:
x1. purpose is to show the software does not work
x3. no difference between testing and debugging
x2. test is a mental discipline that helps develop higher quality software
x4. not defined
x5. purpose of testing is to reduce the risk of using the software
x6. purpose of testing is to show correctness

How can a dictionary passed to the setUp() method change the user's password here?

I am learning Django testing. I recently saw this code; I searched a bit but still couldn't convince myself how the pieces actually work together. Here is the code:

class PasswordChangeTestCase(TestCase):
    def setUp(self, data={}):
        self.user = User.objects.create_user(username='john', email='john@doe.com', password='old_password')
        self.url = reverse('password_change')
        self.client.login(username='john', password='old_password')
        self.response = self.client.post(self.url, data) 

class SuccessfulPasswordChangeTests(PasswordChangeTestCase):
    def setUp(self):
        super().setUp({ # where is this dictionary passed in?
            'old_password': 'old_password',
            'new_password1': 'new_password',
            'new_password2': 'new_password',
        })


    def test_password_changed(self):
        self.user.refresh_from_db()
        self.assertTrue(self.user.check_password('new_password'))

Can you please explain what super().setUp({#keyValues}) does here? I know what super() and setUp() are, but I don't know what it means when they are next to each other, especially when a dictionary is passed into setUp().

The test_password_changed function confused me as well. Where in this code is the old password replaced with the new password, such that refreshing with self.user.refresh_from_db() picks it up? Changing the old password to the new password is done in PasswordChangeForm; I expected to see something like self.client.post(password_change_url, data). Or maybe I didn't understand it properly. Please help me.

Thank you for reading and helping.
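
A hedged sketch of what the two setUp() methods combine to once the dictionary is passed up, reusing the reverse() and User imports from the code above: the dict becomes the data argument and is POSTed to the password_change view, and that view's form (Django's PasswordChangeForm) is what actually validates and saves the new password.

# Inlined equivalent of SuccessfulPasswordChangeTests.setUp():
self.user = User.objects.create_user(username='john', email='john@doe.com',
                                     password='old_password')
self.client.login(username='john', password='old_password')
self.client.post(reverse('password_change'), {
    'old_password': 'old_password',
    'new_password1': 'new_password',
    'new_password2': 'new_password',
})  # the password_change view saves the new password here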

How to test URL args of POST request?

I have a POST view which also gets a GET argument ?company= from the URL. How can I test it using the Django REST Framework test tools?
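
A minimal sketch, assuming a hypothetical endpoint at /things/: with DRF's APITestCase the query string can simply be appended to the URL passed to post(), alongside the request body.

from rest_framework.test import APITestCase

class CompanyArgTests(APITestCase):
    def test_post_with_company_query_param(self):
        # The ?company= argument rides along in the URL; the dict is the POST body.
        response = self.client.post('/things/?company=42', {'name': 'x'}, format='json')
        self.assertEqual(response.status_code, 201)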

Full AWS Stack System Testing in Python

I have lots of unit tests for individual components in my AWS stack, but I'd like to do functional tests across multiple asynchronous systems.

These systems are asynchronous due to the queues they operate on. One application may take an event and generate multiple queue items in different queues to be processed downstream. Ultimately, I'd like to put an event in at one end of my system and see the results at the other end. I already do this manually, but there are many variations I'd like to test, and full-system regression testing for every feature release is becoming a bear to manage.

It often takes minutes for a unit of work to make it through all of the queues, streams, and processes and settle on the other side.

function testing

I do know how I could test something like this, but it's not very pretty.

def test_event():
    event = { 'user_id': 1, 'action': 'like_post', 'data': {} }

    put_event_to_queue(event)

    retry_count = 0
    success = False
    while retry_count < 3 and not success:
        time.sleep(60)

        if (is_action_in_rds()
                and is_action_in_elasticsearch()
                and is_action_in_athena()):
            success = True

        retry_count += 1

    self.assertTrue(success)

My approach would be to put something in the queue to process and check if the result exists after some arbitrary amount of time.

Would this be the standard approach? Is there a testing framework that lends itself best to this sort of testing?

I need a step-by-step tutorial or documents teaching the REST Assured API test framework using Selenium and Java

I need a step-by-step tutorial teaching the REST Assured API test framework using Selenium and Java. I am kind of new to the REST Assured API test automation framework and I want to learn API testing with automation at an advanced level. I understand that there are tons of tutorials available online, but they only cover basic setup and are not fully explained. Please help; I will appreciate your feedback.

How must I test this method with Mockito?

Method in service class:

@Override
@Transactional(readOnly = true)
public Collection<Account> searchAccounts(String partOfName) {
    Collection<Account> accounts = accountDao.getAll();
    CollectionUtils.filter(accounts, account ->
            account.getName().toLowerCase().contains(partOfName.toLowerCase()) ||
                    account.getSurname().toLowerCase().equalsIgnoreCase(partOfName.toLowerCase()));
    return accounts;
}

I do not understand what I must do with CollectionUtils.filter. Should I mock this too? For now I have this in my test class:

@Test
public void searchAccountsByPartOfName() throws ParseException {
    service.searchAccounts("test");
    verify(accountDao, times(1)).getAll();
}
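
A hedged sketch of one way to go further, assuming (as in the test above) accountDao is a mock injected into service, static imports for Mockito and the JUnit assertions, and that Account has name/surname setters: stub getAll() with real data and assert on the filtered result, so CollectionUtils.filter itself never needs mocking, it just runs over the stubbed collection.

@Test
public void searchAccountsFiltersByPartOfName() {
    Account match = new Account();
    match.setName("Testname");
    match.setSurname("Smith");

    Account other = new Account();
    other.setName("Alice");
    other.setSurname("Brown");

    // CollectionUtils.filter mutates the collection in place, so return a mutable one.
    when(accountDao.getAll()).thenReturn(new ArrayList<>(Arrays.asList(match, other)));

    Collection<Account> result = service.searchAccounts("test");

    assertEquals(1, result.size());
    assertTrue(result.contains(match));
    verify(accountDao, times(1)).getAll();
}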

How to solve the "ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multiclass-multioutput' instead." error in Python 3.7

I want to create a program to do emotion classification using TF-IDF and SVM. Before classifying the data, I have to split the dataset into training and testing data using stratified KFold. I've used numpy arrays to store the texts (X) and labels (Y).

But it ended up with this error :

'ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multiclass-multioutput' instead.'


This code is running on Python 3.7.

This is my code:

labels = []

with open(path, encoding='utf-8') as in_file:
    data = csv.reader(in_file)
    for line in data:
        labels.append(line[1])

label_np = np.array(labels)
lp = label_np.reshape(20,20)
# lp = label_np.transpose(0)
# print(lp)

result_preprocess_np = np.array(result_preprocess)
hp = result_preprocess_np.reshape(20,20)

model = LinearSVC(multi_class='crammer_singer')
total_svm = []

total_mat_svm = np.zeros((20,20))

kf = StratifiedKFold(n_splits=3)
kf.get_n_splits(hp, lp)

for train_index, test_index in kf.split(hp, lp):
    # print('Train : ', test_index, 'Test : ', test_index)
    x_train, x_test = hp[train_index], hp[test_index]
    y_train, y_test = lp[train_index], lp[test_index]


vectorizer = TfidfVectorizer(min_df=5,
                             max_df=0.8,
                             sublinear_tf=True,
                             use_idf=True)
train_vector = vectorizer.fit_transform(x_train)
test_vector = vectorizer.transform(x_test)

model.fit(x_train, y_train)
result_svm = model.score(x_test, y_test)
print(result_svm)

I expect the result to be the classification accuracy.
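
A hedged sketch of the likely fix, reusing the variables from the code above: the reshape to (20, 20) turns y into a 2-D array, which scikit-learn reports as 'multiclass-multioutput'. StratifiedKFold expects y as a 1-D array with one label per sample, and the model should be fitted on the TF-IDF vectors rather than the raw texts.

X = np.array(result_preprocess)   # shape (n_samples,) of raw texts, no reshape
y = np.array(labels)              # shape (n_samples,) of single labels, no reshape

kf = StratifiedKFold(n_splits=3)
for train_index, test_index in kf.split(X, y):
    x_train, x_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

    vectorizer = TfidfVectorizer(min_df=5, max_df=0.8, sublinear_tf=True, use_idf=True)
    train_vector = vectorizer.fit_transform(x_train)
    test_vector = vectorizer.transform(x_test)

    model.fit(train_vector, y_train)          # fit on the TF-IDF vectors, not raw text
    print(model.score(test_vector, y_test))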

How to Unit test runtime-specific interface implementations with minimal duplication

I have a Xamarin solution in VS2017 which implements a physical device a few times (over USB/serial and Bluetooth) on Windows UWP and Android. These devices all implement IMyDevice, which defines a simple state machine modelling whether MyDevice is connected or not, and measuring or not.

I test these implementations using Xunit, in their respective namespaces, and since they all implement IMyDevice, I get quite a few tests that are literally identical, and currently repeated 3 times. I will soon add more devices, and must find a way to prevent this repetition from getting completely out of hand.

To test my view models, I build an assembly in a separate, common namespace and include it in my various unit test projects, using:

AddTestAssembly(typeof(MyViewModel).GetTypeInfo().Assembly);

This works for view models because I don't need any platform specific types, but alas, I don't think I'll be able to pass a MyDevice implementation to an assembly that is already built!

A more feasible approach would be to use Xamarin's dependency service

DependencyService.Get<MyDevice>();

However, it seems conceptually wrong to me that my serial/Bluetooth unit tests should require Xamarin.Forms. And I'm not sure I'd be done anyway, because I'd still need to disambiguate my serial and Bluetooth devices on Windows.

Therefore I wrote it out long-hand, using xUnit in two projects/namespaces like:

namespace Test.MyApp.UWP {
    public class TestMyBluetoothDeviceUWP {
        [Fact]
        void DeviceDoesntSuck() {
            Assert.False(new MyUWPBluetoothDevice().sucks());
        }
    }
    public class TestMySerialDeviceUWP {
        [Fact]
        void DeviceDoesntSuck() {
            Assert.False(new MyUWPSerialDevice().sucks());
        }
    }
}

namespace Test.MyApp.Droid{
    public class TestMyDeviceDroid {
        [Fact]
        void DeviceDoesntSuck() {
            Assert.False(new MyDeviceDroid().sucks());
        }
    }
}

The popular answer to questions similar to mine seems to be using a [Theory]

namespace Test.MyApp{
    public class TestMyDevice {
        [Theory]
        [ClassData new MyUWPBluetoothDevice()]
        [ClassData new MyUWPSerialDevice()]
        [ClassData new MyDeviceDroid()]
        void DeviceDoesntSuck(IMyDevice myDevice) {
            Assert.False(myDevice.sucks());
        }
    }
}

This helps with devices defined within the same namespace; however, it does not help with the repetition across test projects (namespaces).

So far, the best solution I can think of (which is not that great) is to define the tests statically and wrap them in callers in the test projects:

namespace Test.MyApp{
    public static class TestMyDevice {
        static void DeviceDoesntSuck(IMyDevice myDevice) {
            Assert.False(myDevice.sucks());
        }
    }
}

namespace Test.MyApp.UWP {
    public class TestMyDeviceUWP {
        [Theory]
        [ClassData new MyUWPBluetoothDevice()]
        [ClassData new MyUWPSerialDevice()]
        void DeviceDoesntSuck(IMyDevice myDevice) {
            Test.MyApp.TestMyDevice.DeviceDoesntSuck(myDevice);
        }
    }
}

namespace Test.MyApp.Droid{
    public class TestMyDeviceDroid {
        [Fact]
        void DeviceDoesntSuck() {
            Test.MyApp.TestMyDevice.DeviceDoesntSuck(new MyDeviceDroid());
        }
    }
}

This is more maintainable, since I only have one implementation of each test, but it is still a lot of duplication because I have to wrap each one on each platform.

In summary: if anyone knows how to run a whole battery of tests against different runtime-dependent implementations of an interface that live in different projects/namespaces, I'd be very interested to hear what you have to say.
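
One pattern that avoids most of this duplication, sketched here with hypothetical names: put the [Fact]s in an abstract base class in the shared test project and let each platform test project supply the concrete device through an overridden factory method. xUnit discovers inherited facts on the derived classes, so each platform project only declares which devices it owns.

// Shared test project
public abstract class MyDeviceContractTests
{
    protected abstract IMyDevice CreateDevice();

    [Fact]
    public void DeviceDoesntSuck()
    {
        Assert.False(CreateDevice().sucks());
    }
}

// Test.MyApp.UWP
public class BluetoothDeviceTests : MyDeviceContractTests
{
    protected override IMyDevice CreateDevice() => new MyUWPBluetoothDevice();
}

public class SerialDeviceTests : MyDeviceContractTests
{
    protected override IMyDevice CreateDevice() => new MyUWPSerialDevice();
}

// Test.MyApp.Droid
public class DroidDeviceTests : MyDeviceContractTests
{
    protected override IMyDevice CreateDevice() => new MyDeviceDroid();
}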

React-redux unit testing: testing a method inside class component that its only function is to dispatch an action to the store

I'm unit testing my book-list component in my React-redux app. Basically, it shows the list of saved books the user has by mapping an array of books into a BookCard component, which has two buttons: modify book and delete book. I've done all my tests successfully except for the delete book method; this method only dispatches an action to the store to delete a book from the state (I haven't done the HTTP request to the backend yet), and the BookList component is a connected component. My problem is that I can't delete the BookCard from the wrapper/instance/mock store, even though it works in the app itself. So I don't know if it is excessive to test this button, because I've already tested the action and reducer for this purpose.

I've tried a lot of things: executing the deleteBook method from the wrapper with enzyme's mount() and shallow() functions (doing the respective dive()s with shallow()), then testing whether the selector for the BookCard still exists (and yes, it still exists). Then I tried to instantiate the component with shallow().instance(), but the same thing happens: it doesn't delete. Finally, I tried testing whether the book was being deleted from the mock store I created specially for that test (using store.getState()), and no, it didn't delete the book.

// State and store 

let state = {
  dataState:
  {
    books: {
      'Random,Jane Doe':
      {
        name: 'Random',
        author: 'Jane Doe',
        keyTakeaways: [{ 'value': 'Nice!' }]
      }
   },
    users: {
      1: {
        userName: 'Jon Doe',
        email: 'jondoe@gmail.com'
      }
    }
  },
  uiState: {
    currentUser: 1
  }
};
let initialStore = configureMockStore(state);

// Test

it('Deletes books properly', () => {
      const name = state.dataState.books['Random,Jane Doe'].name;
      const author = state.dataState.books['Random,Jane Doe'].author;
      bookListWrapper = shallow(<BookList store={initialStore} />);
      bookListWrapper.dive().dive().instance().deleteBook(name, author);
      expect(initialStore.getState()).toEqual(true);
});

// Book Card component return statement

return <>
      <Card id={`card-${id}`} key={id} style=>
        <ButtonBase id={id} style= onClick={this.openDetails}>
          <div className={classes.content}>
            <div className={classes.info}>
              <div>
                <Typography className={classes.title} color="textSecondary">Book's name</Typography>
                <Typography className="book-name" color="textPrimary">{book.name}</Typography>
              </div>
              <div>
                <Typography className={classes.title} color="textSecondary">Author's name</Typography>
                <Typography className="author" color="textPrimary">{book.author}</Typography>
              </div>
            </div>
            <div>
              <Typography style= color='textSecondary'>Key Takeaways</Typography>
              <div className={classes.column}>
                { //Maps the books.keyTakeaways array to show each corresponding value
                  book.keyTakeaways.map((keyTakeaways, idx) => (
                    keyTakeaways.value !== ''
                      ? <p id={`key-takeaway-${idx}`} className={classes.list} key={idx}>{idx + 1}. {keyTakeaways.value}</p>
                  : <Typography key={idx} align="center" color="textPrimary">No learnings or notes for this book</Typography> // In case no Key Takeaways were added
                  ))
                }
              </div>
            </div>
          </div>
        </ButtonBase>
        <div style=>
          <button className={classes.button} id={`delete-${id}`} type="button" onClick={this.deleteBook}>Delete</button>
          <button className={classes.button} id={`modify-${id}`} type="button" onClick={this.modifyBook}>Modify</button>
        </div>
      </Card >
      {details}
    </>

// Delete book method from child (BookCard) component
deleteBook(event) {
    event.preventDefault();
    const { author, name } = this.state; 
    this.props.deleteBook(name, author); // Sends data to parent
}

// Delete book method from parent (BookList) component
deleteBook(name, author) {
    this.props.deleteBook(name, author); // Dispatch action to store
}

The results are that no card is being deleted from the store or component, when it should be, because I'm executing the deleteBook method and dispatching the action to the store. It is worth mentioning that I'm using the redux-mock-store package to create a mock store for the tests.
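
A minimal sketch of what a mock-store assertion could look like instead (the deleteBookAction action creator is a hypothetical name): redux-mock-store never runs reducers, so getState() will not change after a dispatch; it only records dispatched actions, which is what the test can assert on.

import configureMockStore from 'redux-mock-store';

it('dispatches the delete action', () => {
  const store = configureMockStore()(state);
  const wrapper = shallow(<BookList store={store} />);

  wrapper.dive().dive().instance().deleteBook('Random', 'Jane Doe');

  // The mock store records every dispatched action; assert on that record.
  expect(store.getActions()).toContainEqual(deleteBookAction('Random', 'Jane Doe'));
});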

React Apollo Error: No more mocked responses for the query: mutation

Intended outcome:

MockedProvider should mock my createPost mutation.

Actual outcome:

Error: No more mocked responses for the query: mutation...

How to reproduce the issue:

I have a very simple repository. I also created a separate branch with example commit which is breaking the apollo mock provider.

1) Mutation definition is here: https://github.com/developer239/react-apollo-graphql/blob/create-post-integration-tests/src/modules/blog/gql.js#L23

export const CREATE_POST = gql`
  mutation createPost($title: String!, $text: String!) {
    createPost(title: $title, text: $text) {
      id
      title
      text
    }
  }
`

2) The fake request is here: https://github.com/developer239/react-apollo-graphql/blob/create-post-integration-tests/test/utils/gql-posts.js#L68

export const fakeCreatePostSuccess = {
  request: {
    query: CREATE_POST,
    variables: {
      title: 'Mock Title',
      text: 'Mock lorem ipsum text. And another paragraph.',
    }
  },
  result: {
    data: {
      createPost: {
        id: '1',
        title: 'Mock Title',
        text: 'Mock lorem ipsum text. And another paragraph.',
      },
    },
  },
}

3) The component that I am testing lives here: https://github.com/developer239/react-apollo-graphql/blob/create-post-integration-tests/src/pages/Blog/PostCreate/index.js#L24

    <Mutation
      mutation={CREATE_POST}
      update={updatePostCache}
      onCompleted={({ createPost: { id } }) => push(`/posts/${id}`)}
    >
      {mutate => (
        <>
          <H2>Create New Post</H2>
          <PostForm submit={values => mutate({ variables: values })} />
        </>
      )}
    </Mutation>

4) The failing test case lives here: https://github.com/developer239/react-apollo-graphql/blob/create-post-integration-tests/src/pages/Blog/PostCreate/index.test.js#L33

  describe('on form submit', () => {
    it('should handle success', async () => {
      const renderer = renderApp(<App />, ROUTE_PATHS.createPost, [
        fakeCreatePostSuccess,
      ])
      const { formSubmitButton } = fillCreatePostForm(renderer)
      fireEvent.click(formSubmitButton)
      await waitForElement(() => renderer.getByTestId(POST_DETAIL_TEST_ID))
      expect(renderer.getByTestId(POST_DETAIL_TEST_ID)).toBeTruthy()
    })
  })

It seems that I followed all steps from the official documentation but I still can't make this work. Do you have any suggestions? 🙂
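
One thing worth double-checking, as a hedged note (assuming renderApp wraps the tree in MockedProvider): a mock is only used when both the mutation document and the variables are deep-equal to what the component actually sends, and the __typename fields the client adds are a common source of silent mismatches. A minimal sketch that rules the latter out:

<MockedProvider mocks={[fakeCreatePostSuccess]} addTypename={false}>
  <App />
</MockedProvider>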

Proper way to write integration test for Laravel package

So, I need to test Laravel package API endpoints. I'm using Testbench.
Sounded simple enough. Run migrations, seed and make a request using guzzle.

BUT, this package depends on a few other packages which all have migrations, not to mention the User model that comes with Laravel. They are all part of a larger application that acts as a dashboard.

The first problem is running the migrations, since those packages use the loadMigrationsFrom() method and I guess their ServiceProviders are not read or executed when testing; Testbench's migration loading also works differently from Laravel's.

Manually calling artisan migrate before or after the "main" migrations doesn't work because they intertwine by timestamp.

My idea:
Only run unit tests in isolation.
Write integration test to be run as part of the "main app" installation when all models and migrations are present.

Does this make sense? Is there a better way to write and run these tests?
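
A minimal sketch of one alternative (the package class names are hypothetical): Testbench only boots the providers returned from getPackageProviders() in the test case, so listing the dependencies' providers there lets their loadMigrationsFrom() calls run during the test as well.

protected function getPackageProviders($app)
{
    return [
        \My\Package\MyPackageServiceProvider::class,        // package under test
        \Other\Dependency\DependencyServiceProvider::class, // dependency that ships its own migrations
    ];
}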

How to integrate a test report under vs code?

I'm currently writing unit tests and I would like to add my test results to a test report. They are in C# and I use VS Code as my IDE.

Previously I used Visual Studio 2015 and was able to create a nice test report with the ExtentReports framework, which works very well.

However, I would like to know if it's possible to have the same thing in VS Code?

Here is an extract from "VS 2015" :

using AventStack.ExtentReports;
using AventStack.ExtentReports.Reporter.Configuration;
using AventStack.ExtentReports.Reporter;

// ...

public void TestCase1()
{
    ExtentTest test = null;

    try
    {
        test = extent.CreateTest("Test Case 1").Info("Test Started");
        test.Log(Status.Info, "Chrome Browser Launched");
        emailTextField = driver.FindElement(By.XPath(".//*[contains(@id,'email')]"));
        emailTextField.SendKeys(dataValue);
        test.Log(Status.Info, "Email id entered");
        driver.Quit();
        test.Log(Status.Pass, "Test1 Passed");

    }
    catch (Exception e)
    {
       throw;
    }
    finally
    {
        if(driver != null)
        {
            driver.Quit();
        }
    }
}

I would like to be able to reproduce this on VS Code, but it's difficult to find a real alternative on VS Code.
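
One alternative worth mentioning only as a hedge (assuming the tests are run through the .NET Core CLI rather than inside the IDE): dotnet test can emit a TRX results file that CI dashboards or reporting tools can render, independently of VS Code itself.

dotnet test --logger "trx;LogFileName=TestResults.trx"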

How to make Laravel's Notification throw an Exception in test

I have a few Laravel commands that inherit from my own class in order to send Slack messages if they fail. However, if the Slack notification itself fails, I still want the original exception thrown so that the error still ends up in the logs when Slack is unavailable or misconfigured. I have this and it works, but I can't figure out how to trigger an exception in the Notification part in tests.

namespace App\Support;

use App\Notifications\SlackNotification;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Notification;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class CarrotCommand extends Command
{   
    protected function execute(InputInterface $input, OutputInterface $output)                  
    {
        try {
            return parent::execute($input, $output);
        } catch (\Exception $e) {
            try {
                Notification::route('slack', config('app.slack_webhook'))->notify(
                    new SlackNotification(
                        "Error\n" .                                     
                        get_class($e) . ': ' . $e->getMessage(),
                        'warning'
                    )
                ); 
            } catch (\Exception $exception) {

                // I want to reach this part in a test

                $this->error('Failed to send notice to slack after exception: ' . $exception->getMessage());
            }

            throw $e;
        }
    }
}

As Notification::route is defined on the facade itself, I can't use Notification::shouldReceive to trigger the exception, and the SlackNotification is newed up inside the command, making it difficult to mock.

Any ideas as to how I can trigger an exception?
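
A minimal sketch of one possible approach (assumption: the AnonymousNotifiable returned by Notification::route() resolves the notification dispatcher out of the container, so binding a throwing dispatcher makes the notify() call fail):

use Illuminate\Contracts\Notifications\Dispatcher;
use Mockery;

// Bind a dispatcher whose send() always throws before running the command.
$dispatcher = Mockery::mock(Dispatcher::class);
$dispatcher->shouldReceive('send')->andThrow(new \RuntimeException('Slack unavailable'));
$this->app->instance(Dispatcher::class, $dispatcher);

// Running the failing command now should reach the inner catch block and still
// rethrow the original exception.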

Is there a good way to host a MEAN app for testing purposes?

I need an environment for testing my mean application with multiple users. Is there any recommended way or service I can use?

Laravel testbench migration order

I'm trying to create a migration test with Testbench and I'm having trouble setting up migrations.

The test is for the API responses of a package I created. This package in turn depends on another package whose migrations are not published but are loaded using the loadMigrationsFrom() method in its ServiceProvider.

This other package creates tables and a new column in the users table. Now, in an actual Laravel app there is no problem, since I guess it orders all migrations based on their date, but when running a test I get an error saying the users table does not exist.

The only solution I can think of is just publishing the migrations from the other package, but I'd like to avoid that. Is there any other way?

I'm using the DatabaseMigrations trait, but I changed it to:

use DatabaseMigrations {
    runDatabaseMigrations as public parentRunDatabaseMigrations;
}    

public function runDatabaseMigrations()
{
    $this->app->afterResolving(
        function ($migrator) {
            $migrator->path(__DIR__.'/../../vendor/test/my-package/database/migrations');
        }
    );

    $this->parentRunDatabaseMigrations();
}
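
A minimal sketch of another option (assuming Testbench's loadLaravelMigrations() helper is available in the test case): load the framework's default migrations first, so that package migrations which alter the users table find it already in place, and then load the package migrations explicitly.

protected function setUp(): void
{
    parent::setUp();

    $this->loadLaravelMigrations(); // creates the default users table, etc.
    $this->loadMigrationsFrom(__DIR__.'/../../vendor/test/my-package/database/migrations');
}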

How to test previous version of application with new database version?

Let's say now I have my app deployed to production - v1 and it uses a database with version v1. Now I want to deploy a new version of app v2 with database version v2.

I suppose that database changes are backward-compatible, and the v1 app can work with v2 database. But I want to be sure that it works by running tests for the app v1 with database v2.

Is there any ready solutions/techniques for this?

Currently I have an automated pipeline before deploy that works like this:

  1. Run a new database container
  2. Populate database, do other related stuff...
  3. Run tests

But I don't have any ideas on how to automate testing of the previous app version.
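
A rough sketch of one way it could be automated (all image names, helper scripts, and the DATABASE_URL convention here are hypothetical, not from the original pipeline): keep versioned app images, and after preparing the v2 database run the v1 image's test suite against it.

# hypothetical image names and helper scripts throughout
docker run -d --name db-v2 -p 5432:5432 mycompany/app-db:v2   # step 1: new database container
./populate-db.sh                                              # step 2: seed / related stuff
docker run --rm --network host \
    -e DATABASE_URL=postgres://localhost:5432/app \
    mycompany/app:v1 ./run-tests.sh                           # step 3: previous app version, new database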

dimanche 28 avril 2019

While running a Laravel PHPUnit test case, I'm getting the error "Call to a member function connection() on null" in Model.php line 1249

I'm running a Laravel PHPUnit test case and getting an error on the DB connection. The DB connects successfully from the application, but in the test I get "Call to a member function connection() on null" - could not open the database connection server, please update the configuration settings. I did all those things properly. Find the details below:

"php" : ">=7.1.3", "laravel/lumen-framework": "5.8.", "phpunit/phpunit" : "^7.0", "phpunit/php-invoker" : "", "phpunit/dbunit" : "^4.0",

No Such Table when Running Django (2.0.13) Tests with SQLite3 Database

When attempting to run Django (version 2.0.13) automated testing, a test is throwing an operational error, "no such table". The table exists in my production database, and there is migration code to create the table in the migration modules; however, when attempting to create an instance of a model based on the table I get django.db.utils.OperationalError: no such table: test_table.

The testSettings.py looks as such:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'default',
        'TEST_NAME': 'default',
        'USER': 'u',
        'PASSWORD': 'p',
        'HOST': 'localhost',
        'PORT': '',
        'TIME_ZONE': 'Australia/Sydney'
    },
}

MIGRATION_MODULES = {
    ...
    'app': 'app.test.test_migration_app_0001_initial',
}

And it exists in the migration file for testing:

migrations.CreateModel(
  name='Test_Table',
  fields=[
    ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
    ('name', models.CharField(max_length=30, unique=True)),
  ],
),
...
migrations.AddField(
  model_name='test_subtable',
  name='test_table',
  field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='test', to='app.test_table'),
),

Of possible note, the database table is using the INNODB engine.
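
One detail worth checking, offered only as a hedged note since only the settings excerpt is visible: MIGRATION_MODULES maps an app label to a migrations package, not to a single migration module, so pointing it at one file can leave Django with no migrations to apply and therefore no tables in the test database.

# a minimal sketch; the package path is hypothetical
MIGRATION_MODULES = {
    'app': 'app.test.test_migrations',  # a package containing __init__.py and 0001_initial.py
}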

Script to run Java code multiple times and check number of tests passing

I have a Java class with many parameters like:

public class DoesWork {

    public DoesWork(int j, int p, int seed) { /* ... */ }

    public void run() { /* ... */ }

}

I also have a test suite called Tests.java that has 15 tests that test the above class under different scenarios.

One problem with the DoesWork class is that its behaviour changes for different j, p, or seed parameters. So sometimes 10 tests may pass, sometimes 12, etc.

I wish to write a script (which I will leave to run overnight) that goes from 0 to, say, 100 for each of these parameters and runs the tests to find the parameter values that make all (or most of) the tests pass.

How should I go about writing such a script? Should I do it as a new Java program, a bash script, etc.? Please help me with suggestions and/or some basic code to get me started. Thank you.
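
A minimal shell sketch of the sweep (assumptions: a Gradle build, and that the test JVM actually receives j/p/seed, e.g. because build.gradle forwards these system properties to the test task). It runs the suite per combination and records how many tests passed by counting testcase and failure entries in the JUnit XML reports.

for j in $(seq 0 100); do
  for p in $(seq 0 100); do
    for seed in $(seq 0 100); do
      ./gradlew cleanTest test -Dj="$j" -Dp="$p" -Dseed="$seed" > /dev/null 2>&1
      total=$(grep -o '<testcase' build/test-results/test/*.xml | wc -l)
      failed=$(grep -o '<failure' build/test-results/test/*.xml | wc -l)
      echo "$j,$p,$seed,$((total - failed))" >> sweep-results.csv
    done
  done
done

Note that 101 values per parameter is over a million runs, so in practice the ranges or step sizes would need to be narrowed considerably for an overnight sweep.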

Set react-scripts test environment file

We're using react-scripts to run our tests, via the command react-scripts test. Locally we have our own test environments, and use the .env.test file as the environment file (it uses this automatically).

However, in our CircleCI environment we want to run the tests using the development environment, i.e. using .env.development. I can't figure out how to tell react-scripts to use the dev env file. I've tried setting various values before running the command, like REACT_APP_ENV and NODE_ENV. I've also tried using the env-cmd module to tell it to use the dev env file (e.g. env-cmd .env.development react-scripts test), but it STILL uses the test one.

I know in the CircleCI test command I could just overwrite the test env file with the dev one (something like cp -rf .env.development .env.test && react-scripts test), but I was wondering if there was a cleaner way of doing it (since if I did it that way, if we wanted to test using dev environment locally, it'd overwrite our personal .env.test files)?

test_session (TestModelFn) Use cached_session instead. (deprecated)

I know that test_session is deprecated in tensorflow 1.13:

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use self.session() or self.cached_session() instead.

I created a unit test by only inheriting from tf.test.TestCase (which as I understand it is not deprecated?). I was careful not to explicitly call test_session:

class TestModelFn(tf.test.TestCase):

    def test_nothing(self):
        pass

However, when I run this test I see this:

test_nothing (the_thing.test.objective.cost_based.test_model.TestModelFn) ... ok
test_session (the_thing.test.objective.cost_based.test_model.TestModelFn)
Use cached_session instead. (deprecated) ... skipped 'Not a test.'

Why is this happening? I'm using tensorflow 1.13.1.

JEST testing, getting error on import of configure from enzyme

I'm trying to run my tests in repo here: https://github.com/Futuratum/moon.holdings

But I'm getting the following error

/Users/leongaban/projects/Futuratum/moon.holdings/jest.config.js:1 (function (exports, require, module, __filename, __dirname) { import { configure } from 'enzyme'

My tests used to work and I haven't changed anything, so I'm curious what could be causing this problem.

My jest.config.js file looks correct:

import { configure } from 'enzyme'
import Adapter from 'enzyme-adapter-react-16'

configure({ adapter: new Adapter() })

package.json

"scripts": {
  "dev": "next -p 7777",
  "build": "next build",
  "start": "next -p 7777",
  "test": "NODE_ENV=test jest --watch --no-cache",
  "test-win": "SET NODE_ENV=test&& jest --watch"
},
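
A minimal sketch of a likely fix (the file names are hypothetical): jest.config.js is loaded by Node as plain CommonJS, so it cannot contain ES module import statements; the enzyme setup can live in a separate setup file that Jest transforms, referenced from the config.

// jest.config.js
module.exports = {
  setupFiles: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js
import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });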


What is the difference between `assertIn` and `assertContains` in Django?

I am studying some Django testing code and I find assertIn and assertContains quite similar. I read the documentation, but it didn't say anything about assertIn, or maybe I couldn't find it.

The example below checks whether 'john' appears in the content of self.email.body:

self.assertIn('john', self.email.body)

Similarly, this example checks whether csrfmiddlewaretoken appears in the content of self.response:

self.assertContains(self.response, 'csrfmiddlewaretoken')

It looks like their syntax is different, but their functionality is the same. So, what is the difference?

If you could kindly help me understand this with some basic examples, I would really appreciate it.

Thank you so much
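
For what it's worth, the practical difference as I understand it: assertIn is unittest's plain membership check on any two values, while assertContains is Django-specific, it decodes response.content and also asserts the response status code (200 by default). A minimal sketch:

self.assertIn('csrfmiddlewaretoken', self.response.content.decode())        # membership only
self.assertContains(self.response, 'csrfmiddlewaretoken', status_code=200)  # also checks the status code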

Flaky test using an embedded Mysql DB

I've written an integration test that uses an embedded MySQL database. When I execute the test class in my IDE, the tests execute successfully along with the other integration tests in the same class. The issue comes in when I run all the tests together using Gradle via the command line.

The error I'm getting when running via command line looks something like this:

EmployeeRepositoryFindByEmployeeIdTest > shouldReturnDeparmentForEmployeeId() FAILED
    java.lang.AssertionError:
    Expecting:
      <[Employee(ud=118 age=23,department=null),
        Employee(id=219, age=24, department=Deparment(id=5, name=IT))]>
    to contain exactly (and in same order):
      <[Employee(ud=118 age=23,department=Deparment(id=4, name=IT)),
        Employee(id=219, age=24, department=Deparment(id=5, name=IT))]>
    but some elements were not found:
      <[Employee(ud=118 age=23,department=Deparment(id=4, name=IT))]>
    and others were not expected:
      <[Employee(ud=118 age=23,department=null)]>
        EmployeeRepositoryFindByEmployeeIdTest.shouldReturnDeparmentForEmployeeId(EmployeeRepositoryFindByEmployeeIdTest.java:188)

The test calls the following method:

@Repository
public class EmployeeRepository {    
  private final DSLContext dsl;

  public EmployeeRepository(DSLContext dsl, Clock clock) {
    this.dsl = dsl;
  }

  // ...

  public List<Employee> getEmployeeByDepartmentId(int departmentId) {
    return dsl.select(
        EMPLOYEE.ID,
        EMPLOYEE.AGE,
        EMPLOYEE.DEPARMENT_ID,
        DEPARTMENT.ID,
        DEPARTMENT.NAME
    ).from(EMPLOYEE)
        .leftJoin(DEPARMENT).on(DEPARMENT.ID.eq(EMPLOYEE.DEPARMENT_ID))
        .where(DEPARMENT.ID.eq(departmentId))
        .orderBy(EMPLOYEE.ID)
        .fetch()
        .map(this::getEmployee);
  }
}

What is expected after calling the test method is to receive two employees, each with its associated department, but what I get is always null for department_id=4. It feels to me like the join is not executed correctly for some of the rows in the table.

What I've tried to fix this is increasing wait_timeout and max_connections in the MySQL config to something very high, but I still see the same issue. I also upgraded to the latest DB version (Version.v5_7_latest), but still the same. I'm not sure where else to look in order to fix this. Any ideas on how to move forward?

If this helps, here is the MySQL config in the test setup:

  @Bean
  public EmbeddedMysql embeddedMySql() {
    if (embeddedMysql != null) {
      embeddedMysql.reloadSchema("deparment", Collections.emptyList());
      return embeddedMysql;
    } else {
      MysqldConfig config = aMysqldConfig(Version.v5_6_36)
          .withCharset(Charset.UTF8)
          .withPort(3322)
          .withUser("department", "*******")
          .withServerVariable("wait_timeout", "60")
          .withServerVariable("max_connections", "500")
          .build();
      embeddedMysql = anEmbeddedMysql(config)
          .addSchema("department")
          .start();
      return embeddedMysql;
    }
  }

Thanks in advance for the help!

MongoMemoryServer: Unhandled Errors

I'm testing a TypeScript express/mongoose app with jest, supertest, and mongodb-memory-server. All tests are passing, but I'm getting this error on every test that uses mongodb-memory-server.

import { MongoMemoryServer } from 'mongodb-memory-server';
let mongoServer: any;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 600000;
describe('createUser', (): void => {
  let mongoServer: any;
  const opts = {}; 
  beforeAll(async () => {
    mongoServer = new MongoMemoryServer();
    const mongoUri = await mongoServer.getConnectionString();
    await mongoose.connect(mongoUri, opts, err => {});
  });

  afterAll(async () => {
    mongoose.disconnect();
    await mongoServer.stop();
  });

  it('creating user', async (): Promise<void> => {
    const email = 'test@mail.com';
    const username = 'testUser';
    const password = 'testPassword';
    const { userId } = await createUser(email, username, password);
    expect(userId).toBeTruthy();
    expect(typeof userId).toMatch('string');
  });
});
console.error node_modules/jest-jasmine2/build/jasmine/Env.js:289
   Unhandled error
    console.error node_modules/jest-jasmine2/build/jasmine/Env.js:290
      Error [ERR_UNHANDLED_ERROR]: Unhandled error. (MongoNetworkError: read ECONNRESET)
          at Connection.emit (events.js:178:17)
          at Socket.<anonymous> (C:\Users\PC\Desktop\typescript-mern-budget-app\server\node_modules\mongodb-core\lib\connection\connection.js:321:10)
          at Object.onceWrapper (events.js:277:13)
          at Socket.emit (events.js:189:13)
          at emitErrorNT (internal/streams/destroy.js:82:8)
          at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
          at process._tickCallback (internal/process/next_tick.js:63:19)
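
A minimal sketch of one likely fix (assumption: the ECONNRESET comes from stopping the in-memory server while mongoose is still tearing down its connection): await the disconnect before stopping the server.

afterAll(async (): Promise<void> => {
  await mongoose.disconnect(); // wait for mongoose to close its sockets first
  await mongoServer.stop();
});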

How to use two presets with jest:ts-jest and @shelf/jest-mongodb

I found a solution for using two presets, but it didn't work with ts-jest and @shelf/jest-mongodb.
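
A minimal sketch of one way around this (assumption: Jest only accepts a single preset, so keep the mongodb preset and wire TypeScript support in through transform instead of the ts-jest preset):

// jest.config.js
module.exports = {
  preset: '@shelf/jest-mongodb',
  transform: {
    '^.+\\.tsx?$': 'ts-jest',
  },
};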

'new' operator log warning when creating a GameObject, how to get around it?

I'm wondering about something I am doing that is causing this well-known warning:

You are trying to create a MonoBehaviour using the 'new' keyword. This is not allowed. MonoBehaviours can only be added using AddComponent(). Alternatively, your script can inherit from ScriptableObject or no base class at all.

I know that I can simply use AddComponent<T>() to avoid that log warning. I also understand that Unity does not allow anything inheriting from MonoBehaviour to be instantiated with new (ref).

It should be said, however, that right now the tests producing this log warning are all working fine; I just want to get rid of the warning.

This is an example of a test that gives this error.

[Test]
public void HandleHoverSetsCurrentInteractableOnInteractor()
{
    /// Arrange
    GameObject actor;
    actor = new GameObject();
    actor.AddComponent<XRController>();

    GameObject actable;
    actable = new GameObject();
    Handle h = actable.AddComponent<Handle>();
    h.Hover(actor.GetComponent<IActor>());

    /// Assert
    Assert.IsNotNull(actor.GetComponent<IActor>().currentInteractable);
}

Now, as mentioned earlier, this works exactly as I would expect: it creates the GameObject and tests the Hover functionality. It's just giving me that log warning every time I do this. I am therefore wondering if there is a correct way of doing this, preferably without the log warning.
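
If the goal is only to keep the warning out of the test output, one option to consider (a sketch assuming the Unity Test Framework's LogAssert utility, and that the message is logged as a warning) is to declare it as expected at the start of the test, without changing the code under test:

using System.Text.RegularExpressions;
using UnityEngine;
using UnityEngine.TestTools;

// Declare the expected warning so the test run treats it as anticipated output.
LogAssert.Expect(LogType.Warning, new Regex("trying to create a MonoBehaviour using the 'new' keyword"));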

What do each of the symbols next to the test cases mean?

I'm sorry, as I know this is something I should be able to figure out by myself, but I really don't even know how to search for the answer correctly; I gave up after an hour of trying to put it into the right words with no results.

Anyway, I'm doing an assignment and we are given tests to check our code; I'm just not really sure what the symbols next to my tests mean.

https://gyazo.com/204ccaa57684fd8571989da6182a11b6

Obviously testPGCD has failed and testSimplified, testIsConstant, testGetConstant have passed with no issues.

Now here is the grey area for me:

  1. what does the blue box with the cross in them mean? (testAdd, testDifferentiate)
  2. what does no box at all mean? (the last 4 tests on the list)
  3. what does the blue triangle (play button I think?) mean?

Again sorry for something so simple but I'm really lost!

samedi 27 avril 2019

Jest and mongodb-memory-server : Unhandled errors

I'm testing a TypeScript express/mongoose app with jest and mongodb-memory-server. All my tests are running properly, but I'm getting jasmine unhandled errors and I can't find the problem.

let mongoServer: any;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 600000;
jest.mock('../../services/sendEmailConfirmation');
describe('/auth', (): void => {
  let mongoServer: any;
  const opts = {}; // remove this option if you use mongoose 5 and above

  beforeAll(async () => {
    mongoServer = new MongoMemoryServer();
    const mongoUri = await mongoServer.getConnectionString();
    await mongoose.connect(mongoUri, opts, err => {});
  });

  afterAll(async () => {
    mongoose.disconnect();
    await mongoServer.stop();
  });

  describe('auth', (): void => {
    describe('/sign-up', (): void => {
      it('signing up new user', async (): Promise<void> => {
        const response = await request(app)
          .post('/auth/sign-up')
          .send({
            email: 'test@mail.com',
            username: 'testUserName',
            password: '123456789123',
            matchPassword: '123456789123',
          });
        expect(response.status).toEqual(200);
        expect(sendEmailConfirmation).toHaveBeenCalledTimes(1);
        expect(response.body).toEqual({ message: 'User created!' });
      });
    });
  });
});

And the error that this code produces

console.error node_modules/jest-jasmine2/build/jasmine/Env.js:289
  Unhandled error
console.error node_modules/jest-jasmine2/build/jasmine/Env.js:290
  Error [ERR_UNHANDLED_ERROR]: Unhandled error. (MongoNetworkError: read ECONNRESET)
      at Connection.emit (events.js:178:17)
      at Socket. (C:\Users\PC\Desktop\typescript-mern-budget-app\server\node_modules\mongodb-core\lib\connection\connection.js:321:10)
      at Object.onceWrapper (events.js:277:13)
      at Socket.emit (events.js:189:13)
      at emitErrorNT (internal/streams/destroy.js:82:8)
      at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
      at process._tickCallback (internal/process/next_tick.js:63:19)
console.error node_modules/jest-jasmine2/build/jasmine/Env.js:289
  Unhandled error
console.error node_modules/jest-jasmine2/build/jasmine/Env.js:290
  Error [ERR_UNHANDLED_ERROR]: Unhandled error. (MongoNetworkError: read ECONNRESET)
      at Connection.emit (events.js:178:17)
      at Socket. (C:\Users\PC\Desktop\typescript-mern-budget-app\server\node_modules\mongodb-core\lib\connection\connection.js:321:10)
      at Object.onceWrapper (events.js:277:13)
      at Socket.emit (events.js:189:13)
      at emitErrorNT (internal/streams/destroy.js:82:8)
      at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
      at process._tickCallback (internal/process/next_tick.js:63:19)

Providing a list of data

This is the problem: Provided is a list of data about a store’s inventory where each item in the list represents the name of an item, how much is in stock, and how much it costs. Print out each item in the list with the same formatting, using the .format method (not string concatenation). For example, the first print statement should read The store has 12 shoes, each for 29.99 USD.

However, I keep getting an error stating there is a problem in the code and that I still haven't tested the output.
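
A minimal sketch of the expected shape of a solution (assuming each inventory entry is a [name, quantity, price] list, since the actual list isn't shown):

inventory = [["shoes", 12, 29.99], ["shirts", 20, 9.99], ["sweatpants", 25, 15.00]]

for name, quantity, price in inventory:
    # Matches the required example: "The store has 12 shoes, each for 29.99 USD."
    print("The store has {} {}, each for {} USD.".format(quantity, name, price))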

I'm trying to implement tests into a Django application but am getting errors. What am I doing wrong?

I'm trying to implement some unit tests into my project but am getting errors. What am I doing wrong? The error I'm getting is: "File "C:\Users\phily\AppData\Local\Programs\Python\Python37\lib\site-packages\django\conf\__init__.py", line 42, in _setup % (desc, ENVIRONMENT_VARIABLE)) django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings." Any help would be much appreciated.

Transaction model below

class Transaction(models.Model):
    currency = models.CharField(max_length=20)
    amount = models.IntegerField()
    total_price = models.DecimalField(max_digits=8, decimal_places=2)
    date_purchased = models.DateTimeField()
    note = models.TextField(default="")
    owner = models.ForeignKey(User, on_delete=models.CASCADE)
    amount_per_coin = models.DecimalField(max_digits=8, decimal_places=2, editable=False)


    def save(self, *args, **kwargs):
        self.amount_per_coin = self.total_price / self.amount
        super(Transaction, self).save(*args, **kwargs)

    def __str__(self):
        return str(self.pk)+','+self.currency + ', '+str(self.amount)

    def get_absolute_url(self):
        return reverse('transaction-detail', kwargs={'pk': self.pk})

    @property
    def coins_remaining(self):
        return self.amount - sum(self.sales.all().values_list('amount_sold', flat=True))

    @property
    def coin_value(self):
        return get_currency_price(self.currency)

    @property
    def total_value(self):
        value = self.coin_value * self.amount
        return round(value, 2)

    @property
    def profit_loss(self):
        value = float(self.total_value) - float(self.total_price)
        return round(value, 2)

    @property
    def profit_loss_percent(self):
        value = ((float(self.total_value) - float(self.total_price))/self.total_value)*100
        return round(value, 1)

    def get_absolute_profit_loss(self):
        return abs(self.profit_loss)

    def get_absolute_profit_loss_percent(self):
        return abs(self.profit_loss_percent)


Sales model below

class Sale(models.Model):
    amount_sold = models.IntegerField()
    total_price_sold = models.DecimalField(max_digits=8, decimal_places=2)
    date_sold = models.DateTimeField(default=timezone.now)
    note = models.TextField(default="")
    transaction = models.ForeignKey(Transaction, on_delete=models.CASCADE, related_name="sales")
    amount_per_coin_sold = models.DecimalField(max_digits=8, decimal_places=2, editable=False)

    def __str__(self):
        return str(self.pk)+','+str(self.amount_sold) + ', '+self.note

    def save(self, *args, **kwargs):
        self.amount_per_coin_sold = self.total_price_sold / self.amount_sold
        super(Sale, self).save(*args, **kwargs)

    def get_absolute_url(self):
        return reverse('sale-detail', kwargs={'pk': self.pk})

    @property
    def profit_loss(self):
        return (self.amount_per_coin_sold - self.transaction.amount_per_coin) * self.amount_sold

    @property
    def profit_loss_percent(self):
        value = ((self.total_price_sold - (self.transaction.amount_per_coin * self.amount_sold))/self.total_price_sold) * 100
        return round(value, 1)

    def get_absolute_profit_loss(self):
        return abs(self.profit_loss)

    def get_absolute_profit_loss_percent(self):
        return abs(self.profit_loss_percent)


Test below

from django.test import TestCase
from .models import Transaction
import datetime
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cryptopredict.settings")


class TransactionTestCase(TestCase):
    def setUp(self):
        Transaction.objects.create(currency="BTC", amount=2, total_price=5000, date_purchased=datetime.datetime.now(),
                                   note="note", owner=1)
        Transaction.objects.create(currency="BTC", amount=2, total_price=6000, date_purchased=datetime.datetime.now(),
                                   note="note", owner=1)

    def test_animals_can_speak(self):
        """Should return price for single coin"""
        t1 = Transaction.objects.get(total_price=5000)
        t2 = Transaction.objects.get(total_price=6000)
        self.assertEqual(t1.amount_per_coin, 2500)
        self.assertEqual(t2.amount_per_coin, 2000)
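
A hedged note with a minimal sketch (assumption: the ImproperlyConfigured error appears because the tests are invoked outside Django's test runner). Running them through manage.py configures the settings module before any model import:

#     python manage.py test

# If the module must also be importable standalone, the settings module has to be set
# and django.setup() called before the first model import:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cryptopredict.settings")

import django
django.setup()

from django.test import TestCase  # noqa: E402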


dial tcp: Protocol not available go webassembly test

Attempting to go test a WebAssembly function which fires a POST request.

Receive the following error:

firePing_test.go:40: ERROR ON POST REQUEST: Post https://not-the-real-api.execute-api.us-east-1.amazonaws.com/testing: dial tcp: Protocol not available

Running: Ubuntu 18.04.2 LTS go version go1.12.2 linux/amd64

I've tested that the function is valid and will send a request when executing in Chrome. The test also passes when compiled for linux/amd64.

Problem Function:

// FirePing fires a ping
func FirePing(protocol *string, domain *string, params *map[string]string) (*http.Response, error) {

    // Marshal map into POST request body
    reqBody, err := json.Marshal(*params)
    if err != nil {
        return  nil, fmt.Errorf("ERROR ON MARSHAL OF PARAMS: %v", err)
    }

    // Send POST request
    req, err := http.NewRequest("POST", *protocol + "://" + *domain, bytes.NewBuffer(reqBody))
    if err != nil {
        return  nil, fmt.Errorf("ERROR ON FORMING REQUEST: %v", err)
    }
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return nil,fmt.Errorf("ERROR ON POST REQUEST: %v",err)
    }

    return resp, nil
}

Problem Test Function Call:

// FirePing and receive response
    resp, err := FirePing(&config.Config.Protocol, &config.Config.Domain, &m)
    if err != nil {
        t.Error(err)
        return
    }

This test case should pass, as the function call works fine in the browser.

The only other place I have seen this is:

http.Get returns Protocol not available error

which seems to be from the playground disabling TCP connections. I am running this test locally.
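
For what it's worth, a minimal sketch of one way to keep the wasm test hermetic (assumption: under GOOS=js the test runner has no real TCP stack, so the client must not dial out; RoundTripperFunc is a helper defined here, and FirePing would need to accept an *http.Client for this to apply, neither is part of the original code):

// package name is hypothetical
package ping_test

import (
	"io/ioutil"
	"net/http"
	"strings"
	"testing"
)

// RoundTripperFunc adapts a plain function to the http.RoundTripper interface.
type RoundTripperFunc func(*http.Request) (*http.Response, error)

func (f RoundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

// newStubClient returns a client whose requests never touch the network.
func newStubClient() *http.Client {
	return &http.Client{Transport: RoundTripperFunc(func(r *http.Request) (*http.Response, error) {
		return &http.Response{
			StatusCode: http.StatusOK,
			Header:     make(http.Header),
			Body:       ioutil.NopCloser(strings.NewReader(`{"ok":true}`)),
		}, nil
	})}
}

func TestFirePingWithStubbedTransport(t *testing.T) {
	client := newStubClient()
	_ = client // FirePing would need to accept this client instead of building its own
}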