Wednesday, September 30, 2020

Moving from UFT to Open Source

I am looking for options to move from UFT automation to open source. We have years of scripts created in UFT and want to understand what would be best to use to mimic scenarios such as:

  1. Creating a folder for the runtime.
  2. Reading in a CSV file in which each line represents a task it needs to do before moving forward. These tasks include executing SQL scripts, reading data from an Oracle DB, and calling functions in the DB. We work mainly with Oracle DB.
  3. Producing a summary report as a CSV file, or better, in Excel (to keep the same behavior).

What product should I use, and how easy would it be to implement scenarios like those described above? I am interested to know which tools are most commonly used.

Regards
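As a rough gauge of the effort in Python (one commonly suggested open-source route for this kind of UFT migration), here is a minimal sketch of the three steps. The cx_Oracle driver, the tasks.csv layout, the column names and the connection string are all assumptions for illustration, not details taken from the original scripts.

import csv
import os
from datetime import datetime

import cx_Oracle  # assumed Oracle driver; the connection string below is a placeholder


def run():
    # 1. Create a folder for this run
    run_dir = os.path.join("runs", datetime.now().strftime("%Y%m%d_%H%M%S"))
    os.makedirs(run_dir, exist_ok=True)

    summary = []
    connection = cx_Oracle.connect("user/password@host:1521/service")  # placeholder
    cursor = connection.cursor()

    # 2. Read a CSV file in which each line is a task (hypothetical columns: type, statement)
    with open("tasks.csv", newline="") as tasks_file:
        for task in csv.DictReader(tasks_file):
            try:
                cursor.execute(task["statement"])
                summary.append({"task": task["type"], "status": "OK"})
            except cx_Oracle.DatabaseError as exc:
                summary.append({"task": task["type"], "status": "FAILED: {}".format(exc)})

    # 3. Write the summary report as CSV; openpyxl or pandas could produce .xlsx instead
    with open(os.path.join(run_dir, "summary.csv"), "w", newline="") as report:
        writer = csv.DictWriter(report, fieldnames=["task", "status"])
        writer.writeheader()
        writer.writerows(summary)


if __name__ == "__main__":
    run()

The same flow is also straightforward in Robot Framework or pytest; either way, the folder and CSV parts are covered by the Python standard library.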

Python Radish BDD - Using @when, @then & @and VERSUS @step only

Let's say we have 2 situations: 1) use the @when, @then & @and decorators in the Python file to match the corresponding sentences from a feature file; 2) use only @step in the Python file to match the corresponding sentences from the same feature file as above.

Which of the above situations will be faster when testing the feature file, and why?

How to call a Java private static method from a feature file using the Karate framework

I have a Java class that has private methods. I want to call those methods from the Karate feature file.

I'm trying to call it like * def myWork = Java.type('package name'); and then, using myWork, calling the private static method. It's erroring out with TypeError: Not a method.
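For context, Karate's Java interop goes through the embedded JavaScript engine, which can only see public members, so a private static method will typically surface exactly this kind of TypeError. A common workaround is a small public wrapper; the class and method names below are invented for illustration.

// hypothetical Java-side wrapper exposing the private logic to Karate
public class MyWork {

    private static String doWorkInternal(String input) {
        return input.toUpperCase();
    }

    // Karate can call this because it is public
    public static String doWork(String input) {
        return doWorkInternal(input);
    }
}

And in the feature file:

* def MyWork = Java.type('com.example.MyWork')
# com.example.MyWork and doWork are placeholders for your own class and wrapper
* def result = MyWork.doWork('someInput')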

Unexpected results in testing a stateful React component with hooks?? (Jest)

I am new to testing. I created a simple component with a counter state using the useState React hook, which increments the counter when a button is clicked:

Component:

const Counter= () => {
    const[counter, setCounter] = useState(0);

    const handleClick=() => {
        setCounter(counter + 1);
    }

    return (
        <div>
            <h2>{counter}</h2>
            <button onClick={handleClick} id="button">increment</button>
        </div>
    )
}

Counter.test.js:

it('increment counter correctly', () => {
    let wrapper = shallow(<Counter/>);
    const counter  = wrapper.find('h2');
    const button = wrapper.find('button');
    button.simulate('click')
    console.log(counter.text()) // 0
})

Logging counter.text() after simulating the button click prints 0 instead of 1, and when I tried to spy on useState, I got the same problem:

it('increment counter correctlry', () => {
    let wrapper = shallow(<Card />);
    const setState = jest.fn();
    const useStateSpy = jest.spyOn(React, 'useState');

    useStateSpy.mockImplementation((init) => [init, setState]);
     const button = wrapper.find("button")
     button.simulate('click');
     expect(setState).toHaveBeenCalledWith(1);
})

This test fails and I get this error after running the test:

Expected: 1
Number of calls: 0

What am I doing wrong?
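One detail worth noting, as a hedged sketch (assuming Enzyme's shallow renderer with a React 16.8+ adapter): the counter wrapper captured before the click is a stale snapshot, so re-querying the h2 after the click reads the re-rendered output.

it('increments the counter', () => {
  const wrapper = shallow(<Counter />);

  wrapper.find('button').simulate('click');
  wrapper.update(); // flush the re-render (harmless if not strictly needed)

  // re-find the h2 instead of reusing a wrapper captured before the click
  expect(wrapper.find('h2').text()).toBe('1');
});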

Test an asynchronous action with thunk and jest

I can't test an asynchronous action that uses thunk. Could someone tell me what I'm doing wrong, or how I could do it?

File containing the action I want to test: technology.js (action)

import { types } from "../types/types";
import swal from "sweetalert";

export const startFetchTechnologies = () => {
  return async (dispatch) => {
    try {
      dispatch(startLoading());
      const res = await fetch(
        "http://url.com/techs"
      );
      const data = await res.json();
      dispatch(loadTechnologies(data));
    } catch (error) {
      await swal("Error", "An error has occurred", "error");
    }
    dispatch(finishLoading());
  };
};
    
export const loadTechnologies = (data) => ({
  type: types.loadTechnologies,
  payload: data,
});

export const startLoading = () => ({
  type: types.startLoadTechnologies,
});


export const finishLoading = () => ({
  type: types.endLoadTechnologies,
});

File containing the tests I want to perform: technology.test.js (test)

import { startFetchTechnologies } from "../../actions/technology";
import { types } from "../../types/types";

import configureMockStore from "redux-mock-store";
import thunk from "redux-thunk";
import fetchMock from "fetch-mock";
import expect from "expect"; // You can use any testing library

const middlewares = [thunk];
const mockStore = configureMockStore(middlewares);

describe("startFetchTechnologies", () => {
  afterEach(() => {
    fetchMock.restore();
  });

  beforeEach(() => {
    jest.setTimeout(10000);
  });

  test("startFetchTechnologies", () => {

    // fetchMock.getOnce("/todos", {
    //   body: { todos: ["do something"] },
    //   headers: { "content-type": "application/json" },
    // });

    const expectedActions = [
      { type: types.startLoadTechnologies },
      { type: types.loadTechnologies, payload: "asd" },
      { type: types.endLoadTechnologies },
    ];
    const store = mockStore({});

    return store.dispatch(startFetchTechnologies()).then(() => {
      // return of async actions
      expect(store.getActions()).toEqual(expectedActions);
    });
  });
});

The console outputs the following:

 FAIL  src/__test__/actions/technology.test.js (11.407 s)
  startFetchTechnologies
    ✕ startFetchTechnologies (10029 ms)

  ● startFetchTechnologies › startFetchTechnologies

    : Timeout - Async callback was not invoked within the 10000 ms timeout specified by jest.setTimeout.Timeout - Async callback was not invoked within the 10000 ms timeout specified by jest.setTimeout.Error:

      19 |   });
      20 | 
    > 21 |   test("startFetchTechnologies", () => {
         |   ^
      22 | 
      23 |     // fetchMock.getOnce("/todos", {
      24 |     //   body: { todos: ["do something"] },

I have tried increasing the timeout to 30000 and the test keeps failing.

I hope you can help me!
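For what it's worth, the commented-out block is probably the missing piece: with no route registered (and with sweetalert awaited in the catch path), the dispatched promise can simply never settle, which looks exactly like this timeout. A sketch of the test with the real URL mocked and sweetalert stubbed; the response body shape and the stub are assumptions.

// at the top of the test file
jest.mock("sweetalert", () => jest.fn(() => Promise.resolve())); // keep the catch path from hanging

test("startFetchTechnologies", () => {
  fetchMock.getOnce("http://url.com/techs", {
    body: [{ id: 1, name: "asd" }],
    headers: { "content-type": "application/json" },
  });

  const expectedActions = [
    { type: types.startLoadTechnologies },
    { type: types.loadTechnologies, payload: [{ id: 1, name: "asd" }] },
    { type: types.endLoadTechnologies },
  ];
  const store = mockStore({});

  return store.dispatch(startFetchTechnologies()).then(() => {
    expect(store.getActions()).toEqual(expectedActions);
  });
});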

AngularJS - Jasmine - Cannot test Service in Controller callback function

I need help with a Jasmine test. I have this controller with a simple service:

angular.module('${project.name}')
  .controller('MyControllerCtrl', function ($scope, InvocationService) {
    $scope.vm = {};
    var vm = $scope.vm; 
    vm.idResponse="";
    
    vm.invocacion_get = function(){
        var successCallback = function (response){
               vm.idResponse = response.data.totalElements;
               //More complex logic here that must tested, but no....
        }

        var errorCallback = function(error){
            vm.idResponse = error;
        }   
        
        var url = "prefernces/details?id=1";
        InvocationService.get(url, successCallback, errorCallback);
    } 
})

.factory('InvocationService', function($q) {
  var factory = {};           
  factory.get = function(url,successCallback, errorCallback) {
       var deferred = $q.defer();              
       var allData = $http.get("/risam-backend/rest/"+url).then(successCallback, errorCallback)
       deferred.resolve(allData);
       return deferred.promise;
     }
    return factory;
  });

The test code is (work fine):

describe('Testing a controller', function() {
      var $scope, ctrl;
      var serviceMock;
      
      beforeEach(function (){
        serviceMock = jasmine.createSpyObj('InvocationService', ['get']);
        module('${project.name}');          

        inject(function($rootScope, $controller, $q, _$timeout_) {
          $scope = $rootScope.$new();
          ctrl = $controller('MyControllerCtrl', {
            $scope: $scope,
            InvocationService : serviceMock        
          });
        });
      });

      it('test method vm.invocacion_get()', function (){
        $scope.vm.invocacion_get();
        expect(serviceMock.get).toHaveBeenCalled(); //This work ok
      });
    });

With this test I can only check that the method InvocationService.get is called, which is fine.

But I need to test the logic of the functions 'successCallback' and 'errorCallback' (see the vm.invocacion_get() function). These functions are called after 'InvocationService.get' is invoked, but in this test I cannot see anything change. It does not matter if the data returned by the service is simulated; the idea is that the variable 'vm.idResponse' changes value.

Searching the web I saw some examples but couldn't get them to work, sorry.

Thanks!
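One pattern that is often used here (a sketch against the existing serviceMock; the response shape is an assumption): have the spy invoke the callbacks it receives, so the controller's own success/error logic runs, then assert on vm.idResponse.

it('runs successCallback and updates vm.idResponse', function () {
  // make the spy call back immediately with a simulated backend payload
  serviceMock.get.and.callFake(function (url, successCallback, errorCallback) {
    successCallback({ data: { totalElements: 42 } });
  });

  $scope.vm.invocacion_get();

  expect($scope.vm.idResponse).toBe(42);
});

The error path works the same way, with the fake invoking errorCallback instead.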

Can anyone help me get 100% code coverage with this code? Simply checking whether the function has been called will also do

I have no idea how to reach the remaining lines of code. I simply need to get full code coverage. Checking if the function has been called (toHaveBeenCalled()) will also do.

I'm pasting in the TypeScript file and my spec file below. My current code coverage is as follows:

TOTAL: 2 FAILED, 3 SUCCESS

================== Coverage summary ==================

Statements : 30.34% (27/89)

Branches : 0% (0/22)

Functions : 26.92% (7/26)

Lines : 29.07% (25/86)

=====================================================

update-notification.component.ts

import { Component, OnInit } from '@angular/core';
import { FormGroup, FormControl, Validators } from '@angular/forms';
import * as ClassicEditor from '@ckeditor/ckeditor5-build-classic';
import { NotificationService } from '../notification.service';
import { ActivatedRoute, Router } from '@angular/router';
import { ToastrService } from 'ngx-toastr';

@Component({
  selector: 'app-update-notification',
  templateUrl: './update-notification.component.html',
  styleUrls: ['./update-notification.component.scss']
})
export class UpdateNotificationComponent implements OnInit {
  notificationId: any;
  notificationDetails: any;
  parentNotifications: any[];
  editorInstance = ClassicEditor;
  ckEditorConfig = {
    placeholder: 'Add notification description'
  }
  editNotificationGroup = new FormGroup({
    title: new FormControl('', Validators.required),
    parentNotification: new FormControl(),
    description: new FormControl()
  })
  constructor(
    private activeRoute: ActivatedRoute,
    public notificationService: NotificationService,
    private router: Router,
    private toastr: ToastrService
  ) { }

  ngOnInit() {
    this.getParentNotificationsForDropdown();
    this.notificationId = this.activeRoute.snapshot.params.id;
    this.notificationService.getNotificationDetailsById(this.notificationId).
      subscribe((res: any) => {
        this.notificationDetails = res[0];
        this.editNotificationGroup.get('title').setValue(this.notificationDetails.notification_title);
        this.editNotificationGroup.get('description').setValue(this.notificationDetails.notification_message);
        this.editNotificationGroup.get('parentNotification').setValue(this.notificationDetails.linked_notification_id);
      }, error => {
        console.log(error);
        this.editNotificationGroup.get('title').setValue('title');
        this.editNotificationGroup.get('description').setValue('<strong>desc</strong> of test');
        this.editNotificationGroup.get('parentNotification').setValue('null');
      })
  }
  getParentNotificationsForDropdown() {
    this.notificationService.getTradeNotifications().
      subscribe((res: any) => {
        this.parentNotifications = res;
      }, error => {
        console.log('get parent notifications failed', error);
      })
  }

  submitEditNotification() {
    this.notificationService.updatedNotification(this.editNotificationGroup.value, this.notificationId).
      subscribe((res: any) => {
        console.log(res);
        this.toastr.success('Notification updated successfully', 'Notification Blast');
        this.router.navigate(['notifications/list']);
      }, error => {
        console.log(error);
      })
  }

  // if 400 then show error on popup, or else something went wrong or redirect on page

}

My test cases:

update-notificatoin.component.spec.ts

import { ComponentFixture, TestBed } from '@angular/core/testing';
import { NO_ERRORS_SCHEMA } from '@angular/core';
import { NotificationService } from '../notification.service';
import { ActivatedRoute } from '@angular/router';
import { Router } from '@angular/router';
import { ToastrService } from 'ngx-toastr';
import { ClassicEditor } from '@ckeditor/ckeditor5-build-classic';
import { UpdateNotificationComponent } from './update-notification.component';

describe('UpdateNotificationComponent', () => {
  let component: UpdateNotificationComponent;
  let fixture: ComponentFixture<UpdateNotificationComponent>;

  beforeEach(() => {
    const notificationServiceStub = () => ({
      getNotificationDetailsById: notificationId => ({ subscribe: f => f({}) }),
      getTradeNotifications: () => ({ subscribe: f => f({}) }),
      updatedNotification: (value, notificationId) => ({
        subscribe: f => f({})
      })
    });
    const activatedRouteStub = () => ({ snapshot: { params: { id: {} } } });
    const routerStub = () => ({ navigate: array => ({}) });
    const toastrServiceStub = () => ({ success: (string, string1) => ({}) });
    TestBed.configureTestingModule({
      schemas: [NO_ERRORS_SCHEMA],
      declarations: [UpdateNotificationComponent],
      providers: [
        { provide: NotificationService, useFactory: notificationServiceStub },
        { provide: ActivatedRoute, useFactory: activatedRouteStub },
        { provide: Router, useFactory: routerStub },
        { provide: ToastrService, useFactory: toastrServiceStub }
      ]
    });
    fixture = TestBed.createComponent(UpdateNotificationComponent);
    component = fixture.componentInstance;
  });

  it('can load instance', () => {
    expect(component).toBeTruthy();
  });

  it(`editorInstance has default value`, () => {
    expect(component.editorInstance).toEqual(ClassicEditor);
  });

  describe('ngOnInit', () => {
    it('makes expected calls', () => {
      const notificationServiceStub: NotificationService = fixture.debugElement.injector.get(
        NotificationService
      );
      spyOn(component, 'getParentNotificationsForDropdown').and.callThrough();
      spyOn(
        notificationServiceStub,
        'getNotificationDetailsById'
      ).and.callThrough();
      component.ngOnInit();
      expect(component.getParentNotificationsForDropdown).toHaveBeenCalled();
      expect(
        notificationServiceStub.getNotificationDetailsById
      ).toHaveBeenCalled();
    });
  });

  describe('getParentNotificationsForDropdown', () => {
    it('makes expected calls', () => {
      const notificationServiceStub: NotificationService = fixture.debugElement.injector.get(
        NotificationService
      );
      spyOn(notificationServiceStub, 'getTradeNotifications').and.callThrough();
      component.getParentNotificationsForDropdown();
      expect(notificationServiceStub.getTradeNotifications).toHaveBeenCalled();
    });
  });

  describe('submitEditNotification', () => {
    it('makes expected calls', () => {
      const notificationServiceStub: NotificationService = fixture.debugElement.injector.get(
        NotificationService
      );
      const routerStub: Router = fixture.debugElement.injector.get(Router);
      const toastrServiceStub: ToastrService = fixture.debugElement.injector.get(
        ToastrService
      );
      spyOn(notificationServiceStub, 'updatedNotification').and.callThrough();
      spyOn(routerStub, 'navigate').and.callThrough();
      spyOn(toastrServiceStub, 'success').and.callThrough();
      component.submitEditNotification();
      expect(notificationServiceStub.updatedNotification).toHaveBeenCalled();
      expect(routerStub.navigate).toHaveBeenCalled();
      expect(toastrServiceStub.success).toHaveBeenCalled();
    });
  });
});
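Most of the uncovered branches are the error callbacks, which the stubs above never reach because their subscribe only invokes the success handler. A sketch of one extra spec that drives ngOnInit down the error path by overriding the stubbed method; the returned object only mimics the subscribe(success, error) signature the component uses.

describe('ngOnInit error path', () => {
  it('falls back to the default form values when loading details fails', () => {
    const notificationServiceStub: NotificationService = fixture.debugElement.injector.get(
      NotificationService
    );
    spyOn(notificationServiceStub, 'getNotificationDetailsById').and.returnValue({
      subscribe: (success: any, error: any) => error('simulated failure')
    } as any);

    component.ngOnInit();

    expect(component.editNotificationGroup.get('title').value).toBe('title');
    expect(component.editNotificationGroup.get('parentNotification').value).toBe('null');
  });
});

The error callbacks of getParentNotificationsForDropdown and submitEditNotification can be covered the same way.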


PHP unit test - How to fake a vendor Guzzle request

I'm trying to write a test that involves 2 services in PHP.

The internal logic of my service 'A' needs to send a message to service 'B'. I do not have B configured, so it will always respond with an error in a testing environment. However, I don't need to know what 'B' does; I just need to know that after 'B' gives a 200 response, the callback in service 'A' saves some data in my DB.

The communication between services is handled by a private vendor package that uses Guzzle under the hood. The problem I found is that I cannot seem to be able to use Guzzle's mock handler or handler stack without touching the vendor's source code, right?

So, is the best solution to create a fake API that always responds 200 to my requests, on the same ip:port as the real 'B' service?

Thanks!

Testcafe process fails to end when using reporters

I have observed that whenever I run my automated script with a reporter, the test runs through successfully but the TestCafe process does not end. After closing the application browser, the TestCafe run window never closes. This happens only when the run command includes reporter details, like 'testcafe chrome RegressionTestHappyFlows/calculatePremium_ShortTermTravelInsurance.ts --env=Test_English_ET --reporter html:TestRun.html'.

If I remove the reporter details from the command, then the TestCafe process ends properly as expected. I have tried with testcafe-reporter-html (version 1.4.6) and also with multiple-cucumber-html-reporter (version 1.18.0). In both cases I face the same issue. The TestCafe version I am using is 1.9.1.

Could someone please help and suggest a solution?

Thanks

What to test and what to ignore in Jest test for a React Redux Action?

I have a Jest test file that tests an action. In the Jest file, I make sure that the action, loadUser(), dispatches all of its expected action types, USER_LOADED and AUTH_ERROR:

  describe('loadUser() action.', () => {
    test('dispatches USER_LOADED.', async () => {
      mockAxios.get.mockImplementationOnce(() =>
        Promise.resolve({
          data: {
            user: {
              name: 'test name',
              email: 'test@test.com',
              avatar: '',
              date: '',
            },
          },
        })
      );

      await store.dispatch(authActions.loadUser());

      const actions = store.getActions();
      const expectedActions = [
        {
          type: USER_LOADED,
          payload: {
            user: {
              name: 'test name',
              email: 'test@test.com',
              avatar: '',
              date: '',
            },
          },
        },
      ];
      expect(actions).toEqual(expectedActions);
    });

    test('dispatches AUTH_ERROR.', async () => {
      mockAxios.get.mockImplementationOnce(() =>
        Promise.reject({ err: 'Failed to load user.' })
      );

      await store.dispatch(authActions.loadUser());

      const actions = store.getActions();
      const expectedActions = [
        {
          type: AUTH_ERROR,
        },
      ];
      expect(actions).toEqual(expectedActions);
    });
  });

However, when I check the coverage report, the action's Branch coverage is not 100%. The following is the action file and action in question:

export const loadUser = () => async (dispatch) => {
  // This branch is not being tested. Should it be tested in my actions test?
  if (localStorage.token) {
    setAuthToken(localStorage.token);
  }

  try {
    const res = await axios.get('/api/auth');

    dispatch({
      type: USER_LOADED,
      payload: res.data,
    });
  } catch (err) {
    dispatch({
      type: AUTH_ERROR,
    });
  }
};

Should I be testing the branch if (localStorage.token) {} in my actions Jest test? If so, how should I test it? My current assumption is that my Redux action tests should make sure the correct action types are dispatched.

If I don't need to test that branch, is it fine to leave my coverage for branch at 50%?
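If you do decide to cover it, here is a sketch of one extra test that seeds localStorage before dispatching (jsdom provides a working localStorage; whether you also assert on setAuthToken depends on how it is exported, so that part stays a comment). It reuses the mockAxios and store setup from the tests above.

test('sets the auth token when one exists in localStorage', async () => {
  localStorage.setItem('token', 'test-token');
  mockAxios.get.mockImplementationOnce(() =>
    Promise.resolve({ data: { user: { name: 'test name' } } })
  );

  await store.dispatch(authActions.loadUser());

  // the USER_LOADED assertion stays as before; if setAuthToken lives in its own
  // module it could also be jest.mock-ed and asserted on here
  expect(store.getActions()[0].type).toBe(USER_LOADED);

  localStorage.removeItem('token');
});

Leaving the branch at 50% is also a defensible choice if you treat setAuthToken as an implementation detail of the action.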

Jenkins distribute APK files

I have a Jenkins slave that builds my Android app. I'm planning on adding an Appium testing suite in the near future. This suite will not have access to the same network as the Jenkins slave is on. Is it possible for Jenkins to distribute the APK, and for my testing suite on the other network to download the APK that Jenkins has distributed, without having access to that Jenkins?

How to automatically generate test files for existing javascript [closed]

I am looking for a simple javascript module that will analyze javascript code and create appropriate test files for it.

I feel like any time someone asks this question, the fact that they want automatic test generation is glossed-over.

I just want to know:

  • Which framework creates test files AUTOMATICALLY?
  • How? (example: run command X in folder Y)

Thanks!

Tests don't run when I run maven test with an Appium Java project

I would like to try out Appium for my project, and I have finished setting up a new Maven project with Appium and Cucumber, but I have this issue: when I run maven test I always see zero tests run, and I don't understand where the issue is.

JDK installed on my local machine: JDK 15. The paths to the JDK bin and Maven bin are in my Windows PATH variable.

This is my POM file: https://github.com/silv3ri0/test-appium-java/blob/master/com.qa/pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>qa.mobile</groupId>
  <artifactId>com.qa</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>
    <maven.compiler.version>3.6.0</maven.compiler.version>
    <maven.compiler.source>7</maven.compiler.source>
    <maven.compiler.target>7</maven.compiler.target>
  </properties>
  
  <dependencies>
  
    <!-- https://mvnrepository.com/artifact/io.appium/java-client -->
    <dependency>
        <groupId>io.appium</groupId>
        <artifactId>java-client</artifactId>
        <version>7.3.0</version>
    </dependency>
    
    <!-- https://mvnrepository.com/artifact/org.testng/testng -->
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>7.0.0</version>
        <scope>compile</scope>
    </dependency>
    
    <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-java</artifactId>
        <version>6.7.0</version>
        <scope>test</scope>
    </dependency>
        
    <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-junit</artifactId>
        <version>6.6.0</version>
        <scope>test</scope>
    </dependency>
    
    <!-- https://mvnrepository.com/artifact/junit/junit -->
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    
    <!-- https://mvnrepository.com/artifact/io.cucumber/cucumber-testng -->
    <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-testng</artifactId>
        <version>6.5.0</version>
    </dependency>
    
</dependencies>
  
  <build>
        <testResources>
            <testResource>
                <directory>src/test/java</directory>
                <excludes>
                    <exclude>**/*.java</exclude>
                </excludes>
            </testResource>
        </testResources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven.compiler.version}</version>
                <configuration>
                    <encoding>UTF-8</encoding>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                    <compilerArgument>-Werror</compilerArgument>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
            </plugin>            
        </plugins>        
    </build>  
</project>

MyRunnerTestClass: https://github.com/silv3ri0/test-appium-java/blob/master/com.qa/src/test/java/com/qa/runners/MyRunnerTest.java

package com.qa.runners;

import org.testng.annotations.Test;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import io.cucumber.junit.CucumberOptions.SnippetType;

//@RunWith(Cucumber.class)
@CucumberOptions(plugin = {"pretty", "html:target/cucumber", "summary"},
                 features = {"src/test/resources"},
                 glue = {"com.qa.stepdef"} /* tags = {"@appium"} */)
@Test
public class MyRunnerTest {

}
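For comparison, since cucumber-testng is already on the classpath, a TestNG-driven runner would normally extend AbstractTestNGCucumberTests instead of carrying a bare @Test annotation (which gives TestNG a class with no test methods to run). A minimal sketch, reusing the paths from the project above:

package com.qa.runners;

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(
        plugin = {"pretty", "summary"},
        features = "src/test/resources",
        glue = "com.qa.stepdef")
public class MyRunnerTest extends AbstractTestNGCucumberTests {
}

Surefire picks the class up because its name ends in Test; the equivalent JUnit route would be to restore @RunWith(Cucumber.class) and drop the TestNG annotation.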

StepDef class: https://github.com/silv3ri0/test-appium-java/blob/master/com.qa/src/test/java/com/qa/stepdef/MissguidedStepDef.java

package com.qa.stepdef;

import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.Assert;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import utility.Hook;

public class MissguidedStepDef {
    
    private AppiumDriver driver;
    
    public MissguidedStepDef() {
        this.driver = (AppiumDriver) Hook.getDriver();
    }
    
    @Given("^i navigate the shop$")
    public void iNavigateTheShop() {
        // Write code here that turns the phrase above into concrete actions
        Assert.assertTrue(driver.findElement(By.xpath("//*[@text='Accessibility']")).isDisplayed());
        
    }

Hooks Class: https://github.com/silv3ri0/test-appium-java/blob/master/com.qa/src/test/java/utility/Hook.java

package utility;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.cucumber.java.After;
import io.cucumber.java.Before;

public class Hook {

    private static AppiumDriver driver;
    
    @Before
    public void setUpAppium() throws MalformedURLException
    {
        DesiredCapabilities desiredCapabilities = new DesiredCapabilities();
        desiredCapabilities.setCapability("platformName", "Android");
        desiredCapabilities.setCapability("platformVersion", "11");
        desiredCapabilities.setCapability("deviceName", "any device name");
        desiredCapabilities.setCapability("automationName", "UiAutomator2");
        desiredCapabilities.setCapability("avd", "Pixel_3_API_30");
        desiredCapabilities.setCapability("appPackage", "com.poqstudio.app.platform.missguided");
        desiredCapabilities.setCapability("appActivity", "com.poqstudio.app.platform.presentation.countrySwitcher.view.CountrySwitcherSelectCountryActivity");
        desiredCapabilities.setCapability("app", "C:\\projectappium/Missguided_v14.3.0.1_apkpure.com.apk");

        URL url = new URL("http://127.0.0.1:4723/wd/hub");

        driver = new AndroidDriver(url, desiredCapabilities);
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        
        
    }
    
    @After
    public void tearDown()
    {
        driver.quit();
    }
    
    public static WebDriver getDriver()
    {
        return driver;
    }}

Feature file: https://github.com/silv3ri0/test-appium-java/blob/master/com.qa/src/test/resources/AddToBag.feature

Feature: Test add to bag
Scenario: Show register form after add to bag
    Given i navigate the shop
    And select clearence category
    And select seven item from search result
    And add to bag item
    When go to the select pay 
    Then sign in is displayed

Can someone help me? For now it's important for me to verify that at least the first step of my feature file works fine.

Any idea about this issue?

Thanks in advance, Silverio

cypress and select2 not triggering on select

I'm new to cypress and working my way through various examples to support testing of my app.

I've got a select box which uses select2. I can successfully select the item from the dropdown box, but Cypress is not then triggering the select event that my code runs off.

How can I get Cypress to select the item and then trigger the select event?

Here's my test code:

cy.get('select[name=people] > option')
            .eq(3)
            .then(element => cy.get('select[name=people]').select(element.val(), {force:true}));

This successfully selects the 3rd item in the dropdown.

I then have this code which takes actions based on the selection:

$('#people').on("select2:select", function (e) {
                var selectId = $(this).select2('data')[0].id;
                var castName = $(this).select2('data')[0].text;
                $('option:selected', this).prop('disabled', 'disabled');
                addCastTableRow(selectId, castName);
                initRemoveCastButton();
                $(this).select2();

            });

Any thoughts appreciated

Thank you
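A heavily-hedged sketch of one workaround, based on select2's note that its custom events can be triggered manually: fire select2:select from the application's own jQuery instance after making the selection. Everything here assumes the app exposes jQuery on window and keeps the #people element from the snippet above.

cy.window().then((win) => {
  const $people = win.$('#people');
  // fire the custom event the handler above is bound to, passing the selected data
  $people.trigger({
    type: 'select2:select',
    params: { data: $people.select2('data')[0] }
  });
});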

Postman Tests: There was an error in evaluating the test script

I am testing the application and I face this error. I want to handle this error in my script, but I don't know how.

This error happens because I wrote the wrong variable name in the body, and this is also part of the process I need to test.

I want to do something like that:

pm.test("There was an error in evaluating the test script", function(){ pm.expect(??.??).to.eql("There was an error in evaluating the test script: JSONError: Unexpected token '<' at 1:1 <a href="javascript:void(0);" onclick="document.getElem ^"); });

How to test in app purchases in Flutter [IAP][testing]

I'm using the official in_app_purchase package from Google, but I can't find any information on how to test IAP in flutter after it's set up.

Android and iOS have their own guides, but they require you to add libraries and code manually, which is not compatible with the way Flutter autogenerates native code.

Is there a "right" way of testing in app purchases in flutter?

Cannot mock 0 rows result in jOOQ

I have a problem mocking an empty result with jOOQ (version 3.12.3). I use MockDataProvider, but it still returns one row. Here is the code of MockDataProvider:

        class DataProvider implements MockDataProvider {
            @Override
            public MockResult[] execute(MockExecuteContext ctx) {
                return new MockResult[]{new MockResult(0, null)};
            }
        }
        var dbContext = DSL.using(new MockConnection(new DataProvider()));
        //....

So it should return 0 rows, and that's what is needed. However, it actually returns 1 row with 0 as value.

12:51:34.799 [main] DEBUG org.jooq.tools.LoggerListener - Affected row(s)          : 0
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener - Fetched result           : +--------+
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener -                          : |metadata|
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener -                          : +--------+
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener -                          : |0       |
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener -                          : +--------+
12:51:34.863 [main] DEBUG org.jooq.tools.LoggerListener - Fetched row(s)           : 1

Any help or advice is very appreciated.
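One hedged suggestion: give MockResult an actual empty Result built from the queried fields rather than null, which does not describe an empty result set to jOOQ. The table reference below is a placeholder for whatever jOOQ generated for your schema.

import static org.example.generated.Tables.MY_TABLE; // placeholder for your generated table

import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.tools.jdbc.MockDataProvider;
import org.jooq.tools.jdbc.MockExecuteContext;
import org.jooq.tools.jdbc.MockResult;

class DataProvider implements MockDataProvider {
    @Override
    public MockResult[] execute(MockExecuteContext ctx) {
        DSLContext create = DSL.using(SQLDialect.DEFAULT);
        // an empty Result describing the selected columns, instead of null
        Result<Record> empty = create.newResult(MY_TABLE.fields());
        return new MockResult[] { new MockResult(0, empty) };
    }
}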

PHPUnit Tests in PhpStorm failed in a strange way

I configured PhpStorm so that it can run PHPUnit tests for Drupal sites. In PhpStorm I clicked a button to run a test, then I got the message that the test failed. Of course, in a PhpStorm window I could see the detailed command line for running PHPUnit, like the following:

/usr/bin/php7.3 xxx/vendor/phpunit/phpunit/phpunit --bootstrap xxx/core/tests/bootstrap.php --configuration xxx/modules/phpunit.xml --teamcity

Then I copied the command line to a terminal in the OS (Ubuntu) and ran it, and guess what, it passed! So I am very confused by the fact that the same command line leads to different results in PhpStorm and in a terminal in Ubuntu.

Question one: what could be the reason ?

As far as I know, running certain Drupal tests restricts which user may run them (see the following post): https://www.drupal.org/docs/automated-testing/phpunit-in-drupal/running-phpunit-tests#s-permission-problems

When running the tests in an Ubuntu terminal, the user is dev; with this user the tests are OK (because I've adjusted the server configuration according to the post above).

Question two: does PhpStorm use another username for running PHPUnit tests? If so, which one?

Why selenium can't find some text on page

I have a page with a "tree menu". To open the tree menu, I need to click a button. Then I go through some steps and click on titles; for example, I clicked "19,20", then "may", then "2", and at the top of the page (where I chose from this tree menu) there is a button/link. The text in this element has 2 spaces (not one):

"19,20 > may > 2"

On screen I think I see 1 space, but in the tag value there are 2. This ">" is text, but I don't know how Selenium interprets and sees it.

Then I call my function to search for this text:

    def isText(self, text):
        try:
            element = self.driver.find_element(By.XPATH, "//*[text()[contains(., \"" + text + "\")]]")
            return True
        except:
            return False

But Selenium can't find this text. It can find only "19,20"; with the space or the following text it cannot. I opened the page source, but I didn't see a new tag or anything else.

But I tried finding this button/link as an element and checking its text, and that worked. This project works with Dojo, but I can't change anything in it.

Please, can you help me with this problem?

TY! Have a good day!
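Since the stored text has a double space while the visible text has one, a hedged sketch is to compare both sides through normalize-space(), which collapses runs of whitespace. The method below mirrors the one above and would live in the same page-object class; everything else about the page is assumed.

    def isTextNormalized(self, text):
        # normalize-space(.) collapses whitespace runs in the element text,
        # so "19,20  >  may" and "19,20 > may" compare equal; the search text
        # is collapsed to single spaces as well before building the XPath
        needle = " ".join(text.split())
        xpath = '//*[contains(normalize-space(.), "{}")]'.format(needle)
        try:
            self.driver.find_element(By.XPATH, xpath)
            return True
        except Exception:
            return False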

Eclipse(java testing) - terminate mutant version

I am a learner on the Coursera "Introduction to Software Testing" course. I am submitting my unit test assignment but the grader output on the Coursera site shows me this:

mutant1: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant10: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant15: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant18: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant2: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant21: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant23: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant25: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***
mutant3: -0.020833333333333332 *** Learner did not properly terminate mutant version! ***

Kindly help! I have to submit the assignment to get a certificate.

// Demo.java

import java.util.Scanner;

public class Demo {

    public static void main(String[] args) {
        // Reading from System.in
        Scanner reader = new Scanner(System.in);  
        
        System.out.println("Enter side 1: ");
        // Scans the next token of the input as an int.
        int side_1 = reader.nextInt();
        
        System.out.println("Enter side 2: ");
        // Scans the next token of the input as an int.
        int side_2 = reader.nextInt();
        
        System.out.println("Enter side 3: ");
        // Scans the next token of the input as an int.
        int side_3 = reader.nextInt();
        
        if (isTriangle(side_1, side_2, side_3)) {
            System.out.println("This is a triangle.");
        }
        else {
            System.out.println("This is not a triangle.");
        }
        
        reader.close();

    }
    
    public static boolean isTriangle(double a, double b, double c) {
        if ((a + b > c) &&
            (a + c > b) && // should be a + c > b
            (b + c > a)) {
            return true; 
        }
        return false;
    }

}

the testing code is:

import static org.junit.Assert.*;

import org.junit.Test;

public class DemoTest {

    @Test
    public void triangle_test_1() {
        assertTrue(Demo.isTriangle(3,2,4));
    }
    
    @Test
    public void triangle_test_2() {
        assertTrue(Demo.isTriangle(4,3,5));
    }
    
    @Test
    public void triangle_test_3() {
        assertTrue(Demo.isTriangle(5,3,5));
    }
    
    @Test
    public void triangle_test_4() {
        assertTrue(Demo.isTriangle(6,4,5));
    }
    
    @Test
    public void triangle_test_5() {
        assertTrue(Demo.isTriangle(7,5,6));
    }
    
    @Test
    public void triangle_test_6() {
        assertFalse(Demo.isTriangle(13,2,4));
    
    }
    
    @Test
    public void triangle_test_7() {
        assertFalse(Demo.isTriangle(4,3,15));
    }
    
    @Test
    public void triangle_test_8() {
        assertFalse(Demo.isTriangle(3,22,4));
    }
    
    @Test
    public void triangle_test_9() {
        assertFalse(Demo.isTriangle(24,2,5));
    }
    
    @Test
    public void triangle_test_10() {
        assertFalse(Demo.isTriangle(34,2,4));
    }
    
    
    @Test
    public void triangle_test_11() {
        assertFalse(Demo.isTriangle(2,10,4));
        
    }
    
    

}
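One hedged observation: all of the tests above use side lengths that are comfortably inside or outside the triangle inequality, so mutants that only tweak a comparison at the boundary can survive. A couple of boundary cases in the same style usually help (whether they satisfy this particular grader is an assumption):

    @Test
    public void triangle_test_boundary_1() {
        // a + b == c is not a triangle; catches mutants that relax > to >=
        assertFalse(Demo.isTriangle(1, 2, 3));
    }

    @Test
    public void triangle_test_boundary_2() {
        assertFalse(Demo.isTriangle(3, 1, 2)); // b + c == a
        assertFalse(Demo.isTriangle(2, 3, 1)); // a + c == b
    }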

How to make spy return an object when a function has been called?

Basically, what I want to do is let the open function return a MatDialogRef object which has an afterClosed method to spy on.

code.ts

private myFunction(){
    const dialogRef = this.dialog.open(***);
    this.dialogCloseSubscription$ = dialogRef
      .afterClosed()
      .subscribe(result => {
        if (result) {
          this.loadedTimeframe = new MonthInfo(
            moment()
              .year(result.year)
              .month(result.month)
              .startOf('month')
          );
          this.setLabel();
        }
      });
}

I accomplished having a spy object for the dialog, but I am not sure how to continue.

code.spec.ts

describe('code test', () => {
  const dialogSpy = jasmine.createSpyObj('MatDialog', ['close', 'open']);

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [MonthSelectorComponent],
      imports: [BrowserAnimationsModule],
      providers: [
        {
          provide: MatDialog,
          useValue: dialogSpy,
        },
      ],
    }).compileComponents();

  }));

  it('should open dialog', () => {
    spyOn(component.dialog, 'open').and.returnValue(
      jasmine.createSpyObj('MatDialogRef', ['afterClosed'])
    );
    component.openDateDialog();
  });
});

How can I make my dialogSpy return a MatDialogRef object containing the afterClosed method when the open function is called?
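One sketch: build a second spy object for the MatDialogRef, give its afterClosed spy an observable return value shaped like what the subscribe callback reads, and make the open spy return it. The of() import and the result shape are assumptions based on the code above.

import { of } from 'rxjs'; // at the top of the spec file

it('should open dialog and react to the dialog result', () => {
  const dialogRefSpy = jasmine.createSpyObj('MatDialogRef', ['afterClosed']);
  // afterClosed() must return an observable for the subscribe in myFunction to work
  dialogRefSpy.afterClosed.and.returnValue(of({ year: 2020, month: 8 }));
  dialogSpy.open.and.returnValue(dialogRefSpy);

  component.openDateDialog();

  expect(dialogSpy.open).toHaveBeenCalled();
  expect(dialogRefSpy.afterClosed).toHaveBeenCalled();
});

Configuring dialogSpy.open directly also avoids calling spyOn on a method that is already a spy.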

Getting an error while calling a variable at run time using Jenkins

If I run it locally it works fine with the same command, but if I run the same command in Jenkins I get the error below. Can someone please guide me on this?

cd C:\temp\Project path\



IF  "%Execution_File_Name%" == "Tests" (

             echo "Full suite execution scheduled"

             pabot --testlevelsplit --processes 6 --ordering order_file.pabotsuitenames -v 
             URL:%URL% -v username:%UserName% -v password:%Password% -v project:%Password%
             --removekeywords WUKS -d Results %Execution_File_Name%

    ) ELSE (

          echo "Module execution started"

          robot --removekeywords WUKS -d Results -v URL:%URL% -v username:%UserName% -v 
          password:%Password% -v project:%Project% Tests\%Execution_File_Name%
          )

exit 0

I get the error below in Jenkins:

>    "Module execution started"
>[ ERROR ] Expected at least 1 argument, got 0.
>Try --help for usage information.
>The filename, directory name, or volume label syntax is incorrect.

Not able to fetch and checkout in Git as I get Permission Denied

I am trying to fetch and check out a branch and have tried all the options given in different blogs. I was able to clone the Git repo by entering git push https://github.com/repo.git master, but I am not able to fetch and check out the branch, and I need help getting this done.

Can someone please let me know what needs to be done?

from xUnit format to beautiful test reports on GitHub

I have a number of tests on GitHub that output test results in xUnit format, i.e. I have a testResult.xml file in xUnit format. Is it possible to see the test reports in GitHub itself (not outside it), like in TeamCity or Jenkins?

Redux saga testing with api call using jest

I am trying to test my saga function that makes an API call. However, I think I made a mistake somewhere, because the expected and received values are different: expected is the data with the correct action type; received is an error with the failure action type. Please refer to the picture and to the code of my saga and saga test. I would appreciate it if someone could help out and guide me to the correct way of testing a saga with an API call.

Saga testing results

Saga.js

import { failure, loadBrowseLocationsSuccess, loadBrowseClassificationsSuccess } from "./actions";
import ApiConstants, * as actionTypes from "./constants";
import { all, put, takeLatest, call } from "redux-saga/effects";
import ApiService from "../../services/api_service";

export default function* rootSalarySaga() {
  yield all([
    takeLatest(actionTypes.LOAD_BROWSE_LOCATIONS_CLASSIFICATION, loadBrowseLocationsClassificationsData),
  ]);
}


export function* loadBrowseLocationsClassificationsData() {
  const requestUrl = ApiConstants.BROWSE_LOCATIONS_CLASSIFICATIONS_URL;

  const res = yield ApiService.Get(requestUrl);
  if (res.status === 200) {
    yield put(loadBrowseLocationsSuccess(res.data[locations]));
    yield put(loadBrowseClassificationsSuccess(res.data[classifications]));
  } else {
    yield put(failure(res.data));
  }
}

Saga.test

import { put, call } from "redux-saga/effects";
import { loadBrowseLocationsClassificationsData } from "../saga";
import ApiService from "../../../services/api_service";
import ApiConstants, * as actionTypes from "../constants";
import { failure, loadBrowseLocationsSuccess, loadBrowseClassificationsSuccess } from "../actions";

describe("saga testing for browse container", () => {
  it("should dispatch action LOAD_BROWSE_LOCATIONS_SUCCESS with results from loadBrowseLocationsClassificationsData", () => {
    const generator = loadBrowseLocationsClassificationsData();
    const requestUrl = ApiConstants.BROWSE_LOCATIONS_CLASSIFICATIONS_URL;
    const response = { data: { results: "mockData" } };

    expect(generator.next().value).toEqual(ApiService.Get(requestUrl));
    expect(generator.next(response).value).toEqual(put(loadBrowseLocationsSuccess("mockData")));
    expect(generator.next(response).value).toEqual(put(loadBrowseClassificationsSuccess("mockData")));
    expect(generator.next()).toEqual({ done: true, value: undefined });
  });
});

The api_service Get function is just an axios call:

import axios from "axios";

class ApiService {
 static Get(requestUrl) {
    return axios
      .get(requestUrl)
      .then((response) => {
        return response;
      })
      .catch((error) => {
        return error.response;
      });
  }
}

export default ApiService;
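For what it's worth, the usual pattern is for the saga to yield a call(...) effect instead of the bare promise, so the test compares plain effect descriptions and never touches the network. A sketch of both sides under that assumption; the response shape (with a status field and quoted data keys) is also an assumption.

// saga (sketch): yield the effect description rather than invoking the promise
const res = yield call(ApiService.Get, requestUrl);

// test (sketch)
const generator = loadBrowseLocationsClassificationsData();
expect(generator.next().value).toEqual(call(ApiService.Get, requestUrl));

const response = { status: 200, data: { locations: "mockData", classifications: "mockData" } };
expect(generator.next(response).value).toEqual(put(loadBrowseLocationsSuccess("mockData")));
expect(generator.next().value).toEqual(put(loadBrowseClassificationsSuccess("mockData")));
expect(generator.next()).toEqual({ done: true, value: undefined });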

Tuesday, September 29, 2020

Mocking dataframe for testing in python

I am trying to mock a DataFrame for unit testing in Python. Here is the form of the table:

Example record

However, I am having trouble hard-coding a DataFrame that has JSON within it. This is my attempt at mocking a record:

        test_strip = pd.DataFrame(
        [[1],
         "2020-09-21",
         [{'WTI': {'1604188800000': '40.25',
                 '1606780800000': '40.51',
                 '1609459200000': '40.81',
                 '1612137600000': '41.13',
                 '1614556800000': '41.45',
                 '1617235200000': '41.74',
                 '1619827200000': '42.00',
                 '1622505600000': '42.22'}
         }],
         [1]],
        columns=['id','trade_date','prices','contract']
    )

This yields the following error:

ValueError: 4 columns passed, passed data had 10 columns

So it appears it is interpreting my JSON as separate records. I appreciate any help anyone might have.
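The mismatch comes from the outer list being read as one row per element, and the date string then contributing its ten characters as "columns". Wrapping all four values in a single inner list makes it one record, dict and all; a minimal sketch with a shortened prices dict:

import pandas as pd

# one record == one inner list with a value per column; the dict stays intact in "prices"
test_strip = pd.DataFrame(
    [[
        1,
        "2020-09-21",
        {"WTI": {"1604188800000": "40.25", "1606780800000": "40.51"}},
        1,
    ]],
    columns=["id", "trade_date", "prices", "contract"],
)

print(test_strip.loc[0, "prices"]["WTI"]["1604188800000"])  # -> 40.25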

Is it physically possible to create a secure electronic voting system?

I am wondering whether it would ever be feasible to create a secure online voting method. Given the limits of testing and development, could a system be made well enough that it could actually be used?

Cypress reuse test cases in multiple test suites

I am new to Cypress and trying out one POC.

The application I am working on requires me to test the same component in different test suites. Is there any way I can avoid rewriting the same chunk of code, for example by using a function?

export function testWelcomeBanner() {
    ...
 welcomeBanner.should('have.text', 'welcome');
    ...
}

I did try some approaches, like calling this function from an it block in the test suites, but received errors. Any help appreciated.
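The exported-function approach does work as long as the shared check is invoked inside an it()/test callback of each spec; a minimal sketch, with the import path and file names invented for illustration:

// cypress/integration/home.spec.js (path and names are assumptions)
import { testWelcomeBanner } from '../support/shared-checks';

describe('home page', () => {
  it('shows the welcome banner', () => {
    // the shared check runs inside this test's command chain
    testWelcomeBanner();
  });
});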

testCafe slowing the tests down after fileUpload

I am automating a data capture form. The first element of this form is an image file upload control, and I'm uploading a 5KB image. But after this step, the test executes the next step, which is typing text into a textbox, and then it halts for nearly 30 seconds. How can I reduce this unnecessary wait?

test('Test 1', async t => {
  await listingPage.uploadImage();
  await listingPage.listAnItem();
  ..

Upload image code

  async uploadImage() {
    await t.setFilesToUpload(this.imageInput, ['../../uploads/photo02.jpg']);
  }

Data entry code block

async listAnItem() {
    await t
      .typeText(Selector('#description'), this.DESCRIPTION) /* --> Application is slowing after this step */
      .click(this.categorySelect)
      .click(Selector('#react-select-2-option-0-0'))

.testcaferc.json file

{
    "browsers": [ "chrome", "safari" ],
    "remoteChromeVersion": "80",
    "src": "specs",
    "reporter": [
      {
        "name": "spec"
      }
    ],
    "speed": 1,
    "debugOnFail": false,
    "skipJsErrors": true,
    "selectorTimeout": 20000,
    "assertionTimeout": 20000,
    "pageLoadTimeout": 8000
  }

DOM Snippet of file-upload

<input aria-describedby="error__images" aria-label="Upload an image" tabindex="0" type="file" multiple="" accept="image/jpeg, image/png" data-testid="imageInput" class="ImageInputstyles__Input-sc-2l692w-7 aosWu">

Different number of tests on each run of dotnet test on TC

I see a weird behavior when testing my .NET Core projects on TeamCity. When running tests simply using the built-in 'dotnet test' functionality, I get a different number of tests each time I run the build. It is hard to see which test is missing because the report from TeamCity in CSV form has different formatting each time (for instance, sometimes it has the name of the DLL in front, and for the same test in another run there is only the test name). Any ideas?

Tools versions:

  • TeamCity 2020.1
  • .NET Core 3.1.200
  • MSBuild 16.5

All tests have

  • Microsoft.NET.Test.SDK 16.7.1
  • xunit 2.4.1
  • xunit.runner 2.4.3
  • nunit 3.12
  • nunit.testAdapter 3.15.1

Azure kubernetes - testing? [closed]

I am planning to migrate an existing application to Azure Kubernetes. At present, we have unit testing and integration testing.

Is there any other testing to be done specific to AKS deployment? Like scaling, canary deployment?

What are good strategies for testing connections from the perspective of a geographically distant user?

I have a site that is hosted in the Eastern US and has users in East Asia. We're adding some interactive features to the site that require a round trip to the server and I'm concerned about latency impacting the global users' experience. What's a good way to test a site from the perspective of someone with reliable internet that just happens to be located far away from your server?

I've experimented with the throttling settings in the browser dev tools but it's hard to know where to set things. VPNs or ssh tunnels seem like they'd be more realistic but then I'm essentially doubling the distance so maybe I should pick a location halfway?

How to configure the network interface on WANem 3.0

I am attempting to run WANem 3.0 in a VirtualBox (6.0) VM on CentOS 7.2. The documentation at http://wanem.sourceforge.net/documentation.html states that during bootup I should be prompted for whether or not to use DHCP, that I should respond no, and that I will then be given the opportunity to specify a static IP address. However, I never see that prompt. I boot directly to the GUI with the WANem home page in the browser.

I have tried with both a NAT and a bridged adapter configured for the virtual machine. With NAT I get what looks like a VirtualBox default IP address of 10.0.2.15. If I boot with the bridged adapter, I get an IP address from our local DHCP server. I can watch the network initialization from the boot console, which is how I obtain the IP addresses, but I am never given the opportunity to set them. There is a terminal window available, but it only offers a very limited set of commands, and network configuration is not among them.

I would like to set up isolated virtual networks with static IP addresses, but without being able to configure the WANem machine I don't know how I could do that. From googling around, it is my understanding that the documentation has not been updated since WANem 2.2. Any help in configuring 3.0 would be greatly appreciated.

Why specification-arg-resolver Join aliases don't work when created manually?

My application and end-to-end tests work just fine; they find the proper items by tag in the DB. In the controller layer I use the annotation-based specification-arg-resolver to easily handle requests with many parameters and create specifications for me. The problem lies in a @DataJpaTest class where I manually create a Specification. Tests that use joins fail with this:

Caused by: java.lang.IllegalArgumentException: Unable to locate Attribute  with the the given name [pt] on this ManagedType [com.tbar.makeupstoreapplication.model.Product]
at org.hibernate.metamodel.model.domain.internal.AbstractManagedType.checkNotNull(AbstractManagedType.java:147)
at org.hibernate.metamodel.model.domain.internal.AbstractManagedType.getAttribute(AbstractManagedType.java:118)
at org.hibernate.metamodel.model.domain.internal.AbstractManagedType.getAttribute(AbstractManagedType.java:43)
at org.hibernate.query.criteria.internal.path.AbstractFromImpl.locateAttributeInternal(AbstractFromImpl.java:111)
at org.hibernate.query.criteria.internal.path.AbstractPathImpl.locateAttribute(AbstractPathImpl.java:204)
at org.hibernate.query.criteria.internal.path.AbstractPathImpl.get(AbstractPathImpl.java:177)
at net.kaczmarzyk.spring.data.jpa.domain.PathSpecification.path(PathSpecification.java:50)
at net.kaczmarzyk.spring.data.jpa.domain.In.toPredicate(In.java:59)
at net.kaczmarzyk.spring.data.jpa.domain.EmptyResultOnTypeMismatch.toPredicate(EmptyResultOnTypeMismatch.java:55)
at net.kaczmarzyk.spring.data.jpa.domain.Conjunction.toPredicate(Conjunction.java:80)
at net.kaczmarzyk.spring.data.jpa.domain.Conjunction.toPredicate(Conjunction.java:80)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.applySpecificationToCriteria(SimpleJpaRepository.java:762)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.getQuery(SimpleJpaRepository.java:693)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.getQuery(SimpleJpaRepository.java:651)
at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:443)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:371)
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:204)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:657)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:621)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:605)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:80)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:366)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)

The failing test looks like this:

@Test
void shouldFindProductWithTag() {
    Product expectedProduct = createProductWithTags(1L, "ProperTag");
    productRepository.save(expectedProduct);

    Specification<Product> specification = createSpecWithTag("ProperTag");
    Pageable pageable = PageRequest.of(0, 12);

    Page<Product> actualPageOfProducts = productRepository.findAll(specification, pageable);

    assertEquals(expectedProduct, actualPageOfProducts.getContent().get(0));
}
private Conjunction<Product> createSpecWithTag(String... tagName) {
    return new Conjunction<>(createJoin("productTags", "pt"),
            new Conjunction<>(createIn("pt.name", tagName)));
}

private Join<Product> createJoin(String pathToJoinOn, String alias) {
    return new Join<>(
            new WebRequestQueryContext(nativeWebRequestMock), pathToJoinOn, alias, JoinType.INNER, true);
}

private EmptyResultOnTypeMismatch<Product> createIn(String path, String[] expectedValue) {
    return new EmptyResultOnTypeMismatch<>(
            new In<>(
                    new WebRequestQueryContext(nativeWebRequestMock), path, expectedValue,
                    Converter.withTypeMismatchBehaviour(OnTypeMismatch.EMPTY_RESULT)));
}

I copied the structure of the createSpecWithTag private method from debugging the real, working application, where the Specification's toString output looks like this:

Conjunction [innerSpecs=[Join [pathToJoinOn=productTags, alias=pt, joinType=INNER, queryContext=WebRequestQueryContext [contextMap={}], distinctQuery=true], Conjunction [innerSpecs=[EmptyResultOnTypeMismatch [wrappedSpec=net.kaczmarzyk.spring.data.jpa.domain.In@90beeb55]]]]]

Model

@Entity
@Table
@Data
@NoArgsConstructor
public class Product {

    @Id
    private Long id;
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    @JoinColumn(name = "PRODUCT_ID")
    private Set<ProductTag> productTags;
}
@Entity
@Table
@Data
@NoArgsConstructor
@RequiredArgsConstructor
public class ProductTag {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @EqualsAndHashCode.Exclude
    private int id;
    @NonNull
    private String name;
}

Repository

public interface ProductRepository extends JpaRepository<Product, Long> {

    Page<Product> findAll(Specification<Product> specification, Pageable pageable);
}

Controller method

@GetMapping
    public String shopPage(Model model,
        @Join(path = "productTags", alias = "pt")
        @And({
            @Spec(path = "pt.name", params = "product_tags", spec = In.class)
    }) Specification<Product> productSpecification, Pageable pageable) throws ProductsNotFoundException {
        Page<Product> pageOfProducts = makeupService.findProducts(productSpecification, pageable);
        addAttributesToShopModel(model, pageOfProducts);
        return ViewNames.SHOP;
    }

Jest test functions that define variables in it

I want to write some Jest tests for JavaScript functions in a .mjs file. Within these functions, variables are defined by calling other functions. For example:

export function getCurrentVideo() {
    var currentVideo = VideoList.GetCurrentVideo();
    console.log(currentVideo);
    return "This is the video:" + currentVideo;
}

In this case I will receive undefined, right? Because VideoList.GetCurrentVideo can't be reached.

Test will be something like:

const  getCurrentVideo  = import('../').getCurrentVideo;

describe('getCurrentVideo', () => {
    test('if getCurrentVideo will return "This is the video:" + currentVideo', () => {
        expect(getCurrentVideo).toBe('This is the video:" + currentVideo');
    });
});

I know you can add parameters to the function, but that would mean rewriting the functions just for test purposes. It isn't my code; it's a huge project whose owner wants some tests added.
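One way to test this without rewriting the function is to mock the module that provides VideoList. The following is only a sketch: the module paths ('../videoList', '../getCurrentVideo') and the export shape are assumptions, and it presumes the .mjs files are transpiled by Babel for Jest (otherwise Jest's experimental ESM mocking would be needed).

// Sketch only: '../videoList' and its GetCurrentVideo API are assumed names.
jest.mock('../videoList', () => ({
  VideoList: { GetCurrentVideo: jest.fn(() => 'cats.mp4') },
}));

const { getCurrentVideo } = require('../getCurrentVideo');

describe('getCurrentVideo', () => {
  test('returns the prefix plus whatever VideoList reports', () => {
    // Call the function (the snippet above only referenced it) and compare
    // against the value returned by the mocked dependency.
    expect(getCurrentVideo()).toBe('This is the video:cats.mp4');
  });
});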

Programmatically set a URL parameter in the gin context for testing purposes

I am writing some test suites for gin middlewares. I found a solution to test them without having to run a full router engine, by creating a gin context like this :

w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)

The goal is to test my function by calling :

MyMiddleware(c)

// Then I use c.MustGet() to check if every expected parameter has been transmitted to gin
// context, with correct values.

One of my middlewares relies on c.Param(). Is it possible to programmatically set a URL param in gin (something like c.SetParam(key, value)) before calling the middleware? This is only for test purposes, so I don't mind non-optimized solutions.

Java code development without continuous Maven build, deploy and Tomcat restart in a Spring application

  1. My Spring application requires a Maven build, a deploy, and a Tomcat restart for every Java change. Is there a tool or workflow that avoids these steps and speeds up my Java development?

  2. How can we mock a real Java object so that it can be used and tested from the main class?

  3. Is there any way to speed up debugging, testing, and editing Java code (for quick trial-and-run experiments)?

JUnit5: Where is the root of @CsvFileSource defined and can that definition be changed to refer to a different directory?

I started a project from the maven-quickstart archetype and added JUnit 5. There was no resources folder anywhere.

Some time later, one of the testers wanted to add a test that reads from a CSV file. He had some trouble setting it up, and I recalled from memory that it will look in test/resources.

We are all fine now but I just can't stop wondering: Is 'test/resources' hard-coded into JUnit? Or is it somehow derived from the project archetype?

Is there a way to edit this reference in the project settings, vm settings or maybe in the very test method?

Jest doesn't see tests described with the test() function, but works OK with it()

When describing tests using:

it('foo', () => {...})

all tests pass.

But when I change the test to

test('foo', () => {...})

Jest fails with "Your test suite must contain at least one test."

Found a similar question, but nothing worked.

Using latest Jest, have the following presets in babel.config.js:

 'presets': [
    '@babel/preset-env',
    '@babel/preset-react'
 ],
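For what it's worth, test() and it() are aliases provided by the same Jest global, so if one works and the other doesn't, the usual suspects are a local binding or import that shadows test, or a transform rewriting it. A quick way to rule that out is to bind the globals explicitly; this is only a sketch and assumes a reasonably recent Jest that ships @jest/globals.

// Sketch: import the test globals explicitly so that `test` cannot be
// shadowed by another import or a helper of the same name in this file.
import { describe, test, expect } from '@jest/globals';

describe('sanity', () => {
  test('foo', () => {
    expect(1 + 1).toBe(2);
  });
});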

Jest: Logic issue in testing a function

I want to test a function called getRunningContainers, which basically returns all the running containers on my system. It uses two client functions internally, listContainers and getContainer.

  async getRunningContainers(): Promise<ContainerInspectInfo[] | undefined> {
    const runningContainers: ContainerInspectInfo[] = [];
    const opts: any = {
      filters: {
        status: ['running'],
      },
    };
    try {
      const containers: ContainerInfo[] = await this.client.listContainers(
        opts
      );

      for (const c of containers) {
        const container: ContainerInterface = await this.client.getContainer(
          c.Id
        );
        const containerInspect: ContainerInspectInfo = await container.inspect();
        runningContainers.push(containerInspect);
      }

      return runningContainers;
    } catch (err) {
      console.log(`running containers error: ${err}`);
      return;
    }
  }

I am writing the following Jest test, where dummy_container is an object representing a container, dummy_container_responses is an array of such objects, and dummy_info is an object representing a ContainerInspectInfo. I am getting undefined back from getRunningContainers and I can't seem to figure out the reason.

describe('Container Client', () => {
    let containerData = dummy_container_responses;
    const MockDockerClient = {
        listContainers: (opts: any) => new Promise((resolve, reject) => {
            resolve(dummy_info);
        }),

        createContainer: (opts:any) => new Promise((resolve, reject)=> {
            resolve(dummy_container);
        }),

        getContainer: (id: any) => new Promise((resolve, reject) => {
            resolve(dummy_container)
        })
    }

    const containerClient = new Container(MockDockerClient);

    test('starting a container', async() => {
        const container = await containerClient.create(dummy_container);
        const containers = await containerClient.getRunningContainers()

        expect(containers).toEqual(dummy_container)
       
        
    })
}) 

I'm not sure if any more information is needed, so please feel free to let me know if it is. Thanks! For reference, here is the Container class:

import {
  Container as ContainerInterface,
  ContainerCreateOptions,
  ContainerInfo,
  ContainerInspectInfo,
  HostConfig,
} from 'dockerode';

export class Container {
  client: any;

  constructor(client: any) {
    this.client = client;
  }

  async create(
    opts: any
  ): Promise<ContainerInterface | undefined> {
    let container: ContainerInterface;
    const { name, Image, Cmd, HostConfig, Labels, Entrypoint, Env } = opts;
    try {
      const createOpts: ContainerCreateOptions = {
        name,
        Image,
        Cmd,
        HostConfig,
        Labels,
        Entrypoint,
        Env,
      };
      container = await this.client.createContainer(createOpts);
      return container;
    } catch (err) {
      console.log(`create container error: ${err}`);
      return;
    }
  }

  async getRunningContainers(): Promise<ContainerInspectInfo[] | undefined> {
    const runningContainers: ContainerInspectInfo[] = [];
    const opts: any = {
      filters: {
        status: ['running'],
      },
    };
    try {
      const containers: ContainerInfo[] = await this.client.listContainers(
        opts
      );

      for (const c of containers) {
        const container: ContainerInterface = await this.client.getContainer(
          c.Id
        );
        const containerInspect: ContainerInspectInfo = await container.inspect();
        runningContainers.push(containerInspect);
      }

      return runningContainers;
    } catch (err) {
      console.log(`running containers error: ${err}`);
      return;
    }
  }

  static newContainerConfig(
    oldContainer: ContainerInspectInfo,
    newImage: string
  ): ContainerCreateOptions {
    const config: ContainerCreateOptions = {
      name: oldContainer['Name'].replace('/', ''),
      Image: newImage,
      Cmd: oldContainer['Config']['Cmd'],
      HostConfig: oldContainer['HostConfig'],
      Labels: oldContainer['Config']['Labels'],
      Entrypoint: oldContainer['Config']['Entrypoint'],
      Env: oldContainer['Config']['Env'],
    };
    return config;
  }
}
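For what it's worth, the shape of the mock is the first thing to check here: getRunningContainers expects listContainers to resolve to an array of objects with an Id, and expects each getContainer result to expose an inspect() method. If the mocked getContainer resolves to a plain fixture without inspect, the call throws, the catch block logs the error, and the method returns undefined, which matches the symptom. A sketch of a mock that satisfies those expectations, reusing the dummy_info and dummy_container fixtures from the question:

// Sketch only: dummy_info stands in for a ContainerInspectInfo fixture.
const MockDockerClient = {
  // listContainers must resolve to an array of ContainerInfo-like objects with an Id.
  listContainers: async (_opts: any) => [{ Id: 'abc123' }],

  // getContainer must resolve to something that itself exposes inspect().
  getContainer: async (_id: any) => ({
    inspect: async () => dummy_info,
  }),

  createContainer: async (_opts: any) => dummy_container,
};

const containerClient = new Container(MockDockerClient);

test('getRunningContainers returns the inspect results', async () => {
  const containers = await containerClient.getRunningContainers();
  // The method returns an array of inspect results, not a single container object.
  expect(containers).toEqual([dummy_info]);
});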

How to upload a file using QA Wolf and Puppeteer?

I am struggling to upload a file through an input using qawolf and puppeteer. According to qawolf, they plan to develop their own file-upload feature, so in the meantime I need to use the one from Puppeteer (which qawolf depends on).

However, I keep running into the error:

  FAIL  .qawolf/uploadDataSetInProject.test.js (17.394 s)
  ✕ uploadDataSetInProject (13354 ms)

  ● uploadDataSetInProject

    TypeError: uploadElement.uploadFile is not a function

      42 |   await page.fill('[placeholder="Data set description"]', "I am a dataset description");
      43 |   const uploadElement = await page.$('#dataset-inputs > div.project-upload-container > div.project-upload-dataset > input[type=file]');
    > 44 |   await uploadElement.uploadFile('../assets/TestFile10Mo.csv');
         |                       ^
      45 |   await qawolf.scroll(page, "html", { x: 0, y: 571 });
      46 |   await page.click(".button-save");
      47 |   await page.click(".comments-button");

      at Object.test (uploadDataSetInProject.test.js:44:23)

Whereas Puppeteer's docs clearly state that the uploadFile function exists!

Where does this problem come from? How come uploadFile is not a function?

Here is my code:

const qawolf = require("qawolf");

let browser;
let context;

beforeAll(async () => {
  browser = await qawolf.launch();
  context = await browser.newContext();
  await qawolf.register(context);
});

afterAll(async () => {
  await qawolf.stopVideos();
  await browser.close();
});

test("uploadDataSetInProject", async () => {
  const page = await context.newPage();
  [...]
  await page.click('[placeholder="Data set description"]');
  await page.fill('[placeholder="Data set description"]', "I am a dataset description");
  const uploadElement = await page.$('#dataset-inputs > div.project-upload-container > div.project-upload-dataset > input[type=file]');
  await uploadElement.uploadFile('../assets/TestFile10Mo.csv'); // The test crashes here
  await page.click(".button-save");
  await page.click(".comments-button");
  await page.click(".size");
  await page.click(".off");
});
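One possible explanation, if the qawolf version in use wraps Playwright rather than Puppeteer (the browser.newContext() and page.fill() calls above are Playwright APIs): a Playwright element handle has no uploadFile method; its counterpart is setInputFiles. A sketch under that assumption, reusing the selector and file path from the test:

// Sketch: assumes page.$() returns a Playwright element handle.
const uploadElement = await page.$('#dataset-inputs > div.project-upload-container > div.project-upload-dataset > input[type=file]');
// Playwright's counterpart of Puppeteer's uploadFile.
await uploadElement.setInputFiles('../assets/TestFile10Mo.csv');

// Or, without a separate handle:
// await page.setInputFiles('#dataset-inputs input[type=file]', '../assets/TestFile10Mo.csv');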

Automatic test for Excel export

I have a simple method which takes an IEnumerable of an object and exports it as an Excel workbook. The object could be something like a student (to use the classic example).

public class Student 
{
    public string Name { get; set; }
    public int Age { get; set; }
    public int StudentId { get; set; }
}

And then the method I need to test

public void ExportStudents(IEnumerable<Student> students)
{
    // This method will save the students to an Excel workbook with a known name
    ...
}

I would like some sort of test of this, but the best I can come up with is a console application that you can run manually, something like this:

public void Main() 
{
    var students = new List<Student>
    {
        new Student { Name = "John Doe", Age = 19, StudentId = 1 },
        new Student { Name = "Max Power", Age = 21, StudentId = 2 },
        // ...etc
    };

    // Export the list as a workbook
    WorkbookExporter.ExportStudents(students);
}

And then inspect the workbook manually. But this seems like a very slow and primitive way of testing. How do you test something like this - or, to take another example, something that sends an e-mail - in a smarter way?

TestNG drag and drop issue

I need to do a drag and drop on my website. I have successfully dragged and held my element, but I am having difficulty dropping it.

@Test(priority = 8)
public void openSectionArrow() throws InterruptedException {
    Thread.sleep(3000);
    driver.findElement(By.xpath("//*[@id=\"section-sortable\"]/div/div[1]/div[1]/span/a")).click();
    System.out.println("click on section arrow");
}

@Test(priority = 8)
public void dragDrop() throws InterruptedException {
    WebElement srcElement = driver.findElement(By.xpath("/html/body/app-root/app-wrapper/app-template-manager/div/main/div/app-home/div/div/div[2]/div/div[3]/div/div[1]/div[2]/div/div/div/div[1]/div/div[1]"));
    WebElement targetElement = driver.findElement(By.xpath("/html/body/app-root/app-wrapper/app-template-manager/div/main/div/app-home/div/div/div[2]/div/div[2]/div[2]/div[2]/div/div[1]/div/div[2]/div/div/div[1]/gridster/div[2]"));
    Actions builder = new Actions(driver);
    builder.clickAndHold(srcElement).perform();
    Thread.sleep(5000);
    builder.moveToElement(targetElement).release().build().perform();
}

Stored Procedure Database Testing SQL Server [closed]

What is the best way to test a stored procedure in SQL Server?

lundi 28 septembre 2020

I can't make the test pass - PHPUnit

I have a problem writing a test for PDF generation. I have a controller, a service, and a repository; the service is injected into my controller and it calls a method that is part of the repository class. My test always returns a 500 error, probably because of missing dependencies. Does anybody know how to make the test pass?

This is the code:

<?php

namespace Smart\Http\Controllers\Warehouse\Article;

use Barryvdh\DomPDF\Facade as PDF;
use Smart\Http\Controllers\Controller;
use Smart\Models\Warehouse\Article;
use Smart\Services\Warehouse\ArticleService;
use Smart\Services\Warehouse\WarehouseService;

class PdfProfileController extends Controller
{
    /**
     * @var ArticleService
     */
    private ArticleService $articleService;

    /**
     * PdfProfileController constructor.
     *
     * @param ArticleService $articleService
     */
    public function __construct(ArticleService $articleService)
    {
        $this->articleService = $articleService;
    }

    /**
     * @param Article $article
     * @param string|null $report
     *
     * @return mixed
     */
    public function __invoke(Article $article, string $report = null)
    {
        $articlesByCenterAndWarehouse = $this->articleService->warehousesOfArticle($article->id);

        $pdf = PDF::loadView('warehouses.articles.templates.pdf-single-report', [
            'article' => $article,
            'warehouses' => $articlesByCenterAndWarehouse,
            'stateOnDate' => request()->date,
            'warehouseService' => app(WarehouseService::class),
        ])
            ->setPaper('A4', 'portrait');

        return $pdf->stream($article->name.'.pdf');
    }
}

<?php

namespace Smart\Services\Warehouse;

use Illuminate\Http\Request;
use Smart\Exceptions\General;
use Smart\Models\Warehouse\Article;
use Smart\Repositories\Warehouse\ArticleRepository;

class ArticleService
{
    /**
     * @var ArticleRepository
     */
    private ArticleRepository $articleRepository;

    /**
     * ArticleService constructor.
     *
     * @param ArticleRepository $articleRepository
     */
    public function __construct(ArticleRepository $articleRepository)
    {
        $this->articleRepository = $articleRepository;
    }

    /**
     * @param int $id
     *
     * @return mixed
     */
    public function warehousesOfArticle(int $id)
    {
        return $this->articleRepository->getWarehousesOfArticle($id);
    }
}

<?php

namespace Smart\Repositories\Warehouse;

use Illuminate\Support\Facades\DB;
use Smart\Models\Center\Building;
use Smart\Models\Center\Room;
use Smart\Models\Codebooks\PurposeOfBuilding;
use Smart\Models\Codebooks\PurposeOfRoom;
use Smart\Models\Warehouse\Article;
use Smart\Repositories\BaseRepository;

class ArticleRepository extends BaseRepository
{
    /**
     * ArticleRepository constructor.
     *
     * @param Article $model
     */
    public function __construct(Article $model)
    {
        $this->model = $model;
    }

    /**
     * @param int $id
     *
     * @return mixed
     */
    public function getWarehousesOfArticle(int $id)
    {
        $articleIncoming = DB::table('article_incoming_document')
            ->where('article_id', $id)
            ->select('article_id', 'warehouse as warehouse_id', 'warehouse_type', DB::raw('sum(quantity) as quantity'))
            ->groupBy(['article_id', 'warehouse', 'warehouse_type']);

        $articleOutgoing = DB::table('article_outgoing_document')
            ->where('article_id', $id)
            ->select('article_id', 'warehouse as warehouse_id', 'warehouse_type', DB::raw('sum(quantity) as quantity'))
            ->groupBy(['article_id', 'warehouse', 'warehouse_type']);

        $rooms = DB::table('rooms')
            ->select([
                'rooms.id',
                'rooms.name as warehouse',
                'centers.name as center',
                DB::raw('coalesce(incoming.quantity,0) - coalesce(outgoing.quantity,0) as quantity')
            ])
            ->selectRaw('"Smart\\\Models\\\Center\\\Room" as type')
            ->join('buildings', 'buildings.id', '=', 'rooms.building_id')
            ->join('centers', 'centers.id', '=', 'buildings.center_id')
            ->leftJoin('purpose_of_rooms', 'purpose_of_rooms.id', '=', 'rooms.purpose_of_room_id')
            ->where('purpose_of_rooms.name', PurposeOfRoom::WAREHOUSE_ROOM)
            ->leftJoinSub($articleIncoming, 'incoming', function ($join) {
                $join->on('rooms.id', '=', 'incoming.warehouse_id')
                    ->where('incoming.warehouse_type', '=', Room::class);
            })
            ->leftJoinSub($articleOutgoing, 'outgoing', function ($join) {
                $join->on('rooms.id', '=', 'outgoing.warehouse_id')
                    ->where('outgoing.warehouse_type', '=', Room::class);
            })
            ->having('quantity', '>', 0);

        $warehouses = DB::table('buildings')
            ->select([
                'buildings.id',
                'buildings.name as warehouse',
                'centers.name as center',
                DB::raw('coalesce(incoming.quantity,0) - coalesce(outgoing.quantity,0) as quantity')
            ])
            ->selectRaw('"Smart\\\Models\\\Center\\\Building" as type')
            ->join('centers', 'centers.id', '=', 'buildings.center_id')
            ->join('purpose_of_buildings', 'purpose_of_buildings.id', '=', 'buildings.purpose_of_building_id')
            ->where('purpose_of_buildings.name', PurposeOfBuilding::WAREHOUSE_BUILDING)
            ->leftJoinSub($articleIncoming, 'incoming', function ($join) {
                $join->on('buildings.id', '=', 'incoming.warehouse_id')
                    ->where('incoming.warehouse_type', '=', Building::class);
            })
            ->leftJoinSub($articleOutgoing, 'outgoing', function ($join) {
                $join->on('buildings.id', '=', 'outgoing.warehouse_id')
                    ->where('outgoing.warehouse_type', '=', Building::class);
            })
            ->having('quantity', '>', 0)
            ->union($rooms);

        return DB::query()->fromSub($warehouses, 'warehouses')->get();
    }
}

Is there an idiomatic way to set up CI regression tests with nix?

I've been setting up GitLab CI using nix. My current approach is to have a default.nix which contains the packages provided by a git repository.

{
    package_a = import ./package_a/default.nix;
    package_b = import ./package_b/default.nix;
}

CI can then merely invoke nix-build and build/check everything.

The problem comes up with regression tests. I need to run a python regression test which executes something exported from package_a, using a file from package_c (which is not yet managed by nix).

Two possibilities I've considered are either a mkDerivation that runs the regression test manually during the check phase, or buildPythonPackage with the regression tests set up as the tests for an empty Python package. Both feel like kind of a hack.

Is there an idiomatic way of setting this up with nix? Thanks.

App doesn't start on Internal / Alpha Testing

I've just finished recreating my iOS app for Android and have just started Internal / Alpha Testing for the first time. I passed the review (although I don't know why they need this for 'internal' testing) and waited a long time for it to become available for testing via the Google Play Store. Now that I finally can, I downloaded it to my device, but as soon as I pressed Open, the app wouldn't start. It just flashes the screen and that's it.

I tested in Debug Mode in Android Studio before and after this without changing anything. It just worked fine.

I contacted Google and have talked over chat as well as email, but nothing is helping. They gave me this link, which didn't help either: https://support.google.com/googleplay/?p=agent_troubleshooter

So, is there any way I can find out the cause of this? Any way to get some logs out of it? Are there any settings I might have gotten wrong when creating the .aab? (Although I think the Google Play Console wouldn't accept it if anything were wrong...)

Any help or clue to solve this issue will be greatly appreciated.

How to use Qt Creator and Squish Coco

I'm writing unit tests, but I would like to visualize the test coverage. My project is in Qt. I have read a lot, but I don't understand how to apply it to my code! What are the specific steps? Could you share a code example?

How can I use jest leak detector correctly?

I wrote my test code in a file named test.test.ts. It requires testUtils.ts as client and calls createClient() and removeClient().

test('test', async () => {
  let client = require('../testUtils')
  expect(1).toBe(1)
  jest.resetModules()
  await client.createClient()
  await client.removeClient()
  let detector = new LeakDetector(client)
  console.log(await detector.isLeaking())
  client = null
  jest.resetModules()
  console.log(await detector.isLeaking())
  global.gc()
})

And here is testUtils.ts. This file requires and uses some modules. After all the work is done I call removeClient() from test.test.ts, so I expect all of the modules to be garbage collected. But they aren't...

import LeakDetector from 'jest-leak-detector'
export let testClient = null
export let query = null
export let mutate = null

let server = null
let {ApolloServer} = require('apollo-server-express')
let {createTestClient} = require('apollo-server-testing')
let jwt = require('jsonwebtoken')
let {Config} = require('./config')
let {db} = require('my sequelize model')
let {basicDefs, mutationDefs} = require('./defs')
let {AuthDirective} = require('./directives')
let {resolvers} = require('./resolvers')

export const generateToken = async () => {
  const user = await db.BlahBlah.findOne({
    attributes: ['BlahBlah1', 'BlahBlah2'],
    where: {
      username: 'BlahBlah',
    },
  })
  return jwt.sign({BlahBlah: BlahBlah, BlahBlah: BlahBlah2}, BlahBlah3)
}

export const createClient = async () => {
  if (!testClient) {
    server = new ApolloServer({
      typeDefs: [basicDefs, mutationDefs],
      resolvers,
      schemaDirectives: {
        auth: AuthDirective,
      },
      engine: false,
      context: {
        req: {
          headers: {
            authorization: `BlahBlah ${await generateToken()}`,
          },
        },
      },
      formatError: (err) => {
        if (err.message.startsWith('jwt must be provided')) {
          return new Error('no login information')
        } else return new Error(err.message)
      },
    })
    testClient = createTestClient(server)
    query = testClient.query
    mutate = testClient.mutate
  }
}

export const removeClient = async () => {
  jest.resetModules()
  await server.stop()

  let detector = new LeakDetector(testClient)
  console.log(await detector.isLeaking())
  testClient = null
  global.gc()
  console.log(await detector.isLeaking())

  detector = new LeakDetector(mutate)
  console.log(await detector.isLeaking())
  mutate = null
  global.gc()
  console.log(await detector.isLeaking())

  detector = new LeakDetector(query)
  console.log(await detector.isLeaking())
  query = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(ApolloServer)
  console.log(await detector.isLeaking())
  ApolloServer = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(server)
  console.log(await detector.isLeaking())
  server = null
  global.gc()
  console.log(await detector.isLeaking())

  detector = new LeakDetector(createTestClient)
  console.log(await detector.isLeaking())
  createTestClient = null
  global.gc()
  console.log(await detector.isLeaking())

  detector = new LeakDetector(jwt)
  console.log(await detector.isLeaking())
  jwt = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(Config)
  console.log(await detector.isLeaking())
  Config = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(db)
  console.log(await detector.isLeaking())
  db = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(basicDefs)
  console.log(await detector.isLeaking())
  basicDefs = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(mutationDefs)
  console.log(await detector.isLeaking())
  mutationDefs = null
  global.gc()
  console.log(await detector.isLeaking()) // here

  detector = new LeakDetector(AuthDirective)
  console.log(await detector.isLeaking())
  AuthDirective = null
  global.gc()
  console.log(await detector.isLeaking())

  detector = new LeakDetector(resolvers)
  console.log(await detector.isLeaking())
  resolvers = null
  global.gc()
  console.log(await detector.isLeaking())
}

I expected each second console.log to print false, because I removed the references by assigning null and requested garbage collection, but some of the console.log calls (marked // here) print true.

Plus, I got this error when I ran node --expose-gc ./node_modules/.bin/jest --logHeapUsage --detectLeaks -- test.test.ts

EXPERIMENTAL FEATURE!
Your test suite is leaking memory. Please ensure all references are cleaned.

There is a number of things that can leak memory:
  - Async operations that have not finished (e.g. fs.readFile).
  - Timers not properly mocked (e.g. setInterval, setTimeout).
  - Keeping references to the global scope.

  at onResult (node_modules/@jest/core/build/TestScheduler.js:190:18)
  at node_modules/@jest/core/build/TestScheduler.js:304:17
  at node_modules/emittery/index.js:260:13
      at Array.map (<anonymous>)
  at Emittery.Typed.emit (node_modules/emittery/index.js:258:23)

Is there anyone who can explain this and knows how to solve this problem?

Thank you for your time.
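For reference, the pattern jest-leak-detector is designed around is narrow: construct the detector with a reference, drop every reference you still hold to that value, then await isLeaking(). A minimal sketch of that pattern is below. Many of the checks above report true not because of the local variable that was set to null, but because the value is still reachable through other paths (Node's module cache, the Apollo server's internals, the closures inside testUtils.ts), so nulling one binding is not enough.

import LeakDetector from 'jest-leak-detector';

test('a value with no remaining references can be collected', async () => {
  let value: object | null = { hello: 'world' };
  const detector = new LeakDetector(value);

  // Still referenced by the local variable, so it is reported as leaking.
  expect(await detector.isLeaking()).toBe(true);

  // Drop the only reference; the object becomes collectable.
  value = null;
  expect(await detector.isLeaking()).toBe(false);
});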

Is there free mapping software for high-precision maps based on lidar point cloud data and GPS?

We are building an autonomous driving platform, and I need to create a high-resolution map myself. I installed a 128-line lidar and a high-precision inertial navigation system, but I don't know which tools to use to create a high-precision map. Is there free mapping software for building high-precision maps from lidar point cloud data and GPS latitude/longitude information? Can Matlab do this?

No tests found with test runner 'JUnit 5'

I'm trying to write JUnit tests for the first time for a basic class, but I can't seem to get my tests running. I've been looking for answers, but none of them seem to work for me (screenshot: "JUnit error").

Andrew Mead Expensify: test (Jest) passes for any value (object) inside the toEqual argument?

These are my action generators:

export const addExpense = (expense) => ({
  type: "ADD_EXPENSE",
  expense,
});

export const startAddExpense = (expenseData = {}) => {
  return (dispatch) => {
    const {
      description = "no description",
      note = "empty",
      amount = 0,
      createdOn = 0,
    } = expenseData;
    const expense = { description, note, amount, createdOn };
    return database
      .ref("expenses")
      .push(expense)
      .then((ref) => {
        dispatch(
          addExpense({
            id: ref.key,
            ...expense,
          })
        );
      });
  };
};

Now my Jest test for the startAddExpense action generator is:

test("startAddExpense: should add expense to database and store", (done) => {
const store = createMockStore({});
const expenseData = {
  description: "Mouse",
  amount: 3000,
  note: "This one is better",
  createdOn: 1000,
};

store
 .dispatch(startAddExpense(expenseData))
 .then(() => {
   const actions = store.getActions();
   // expect(actions[0]).toEqual({});
   //OR
   expect(actions[0]).toEqual({
     type: "ADD_EXPENSE",
     expense: {
       id: expect.any(String),
       ...expenseData,
     },
   });
   return database.ref(`expenses/${actions[0].expense.id}`).once("value");
 })
.then((dataSnapShot) => {
  // expect(dataSnapShot.val()).toEqual({});
  //OR
  expect(dataSnapShot.val()).toEqual({
    id: expect.any(String),
    ...expenseData,
  });
 });
done();
});

My question is: why does the test (Jest) pass for any value (object) inside the toEqual argument, for both actions[0] and dataSnapShot.val()?

You can see that in the case above, I once passed an empty object (now commented out) and another time passed the expected object, but the test passes in both cases. I even tried putting arbitrary key-value pairs inside the object and it still passes the test.
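One likely explanation (a guess from reading the test, not a verified diagnosis): done() is called synchronously at the bottom of the test, outside the promise chain, so Jest marks the test as finished before any of the .then callbacks and their expect calls run; a failing toEqual then surfaces as an unhandled rejection rather than a test failure. Returning the promise chain (or calling done() only in the last .then) ties the assertions to the test's lifetime. A sketch of the returned-promise variant:

test("startAddExpense: should add expense to database and store", () => {
  const store = createMockStore({});
  const expenseData = {
    description: "Mouse",
    amount: 3000,
    note: "This one is better",
    createdOn: 1000,
  };

  // Returning the chain makes Jest wait for it and report rejected assertions.
  return store
    .dispatch(startAddExpense(expenseData))
    .then(() => {
      const actions = store.getActions();
      expect(actions[0]).toEqual({
        type: "ADD_EXPENSE",
        expense: { id: expect.any(String), ...expenseData },
      });
      return database.ref(`expenses/${actions[0].expense.id}`).once("value");
    })
    .then((dataSnapShot) => {
      // The pushed value contains no id field, so compare against expenseData itself.
      expect(dataSnapShot.val()).toEqual(expenseData);
    });
});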

What advantage do you gain by testing with Appium?

I don't understand why I should choose Appium for UI testing. The only real benefit I see is that it lets me write tests in whatever language I'm comfortable with. But say I have an Android application that I want to test: adding Appium as the UI-testing layer just seems to add overhead through the HTTP client/server. Instead I can just use Espresso and write the tests that way, which reduces the time it takes to run them since I don't need the Appium server/client. So should I only consider Appium if I want to test my app cross-platform? Even then it seems more efficient to write iOS tests in EarlGrey and Android tests in Espresso.

Will moving code from a thunk to a custom hook make my app's code harder to test?

When my app first renders, I need to set up some listeners (an authentication listener and listeners for some global data that is needed from the first render).

The stack I'm using: react, redux (thunks) and firebase.

I need to decide between the 2 following patterns on how to set up those listeners:

OPTION #1

Treat it as a regular thunk.

APP_THUNKS.ts

export const setAuthListener = (): APP_THUNK => {
  return async (dispatch,getState) => {
    // CODE TO LISTEN TO THE auth OBJECT FROM firebase
    // AND UPDATE THE STATE ON EVERY NEW snapshot
    dispatch(ACTIONS.UPDATE_AUTH(auth));
  };
};

App.tsx

const dispatch = useDispatch();

useEffect(() => {
  dispatch(setAuthListener());
},[dispatch]);

OPTION #2

Move the thunk code to a custom hook called: useAuthListener(). Example:

useAuthListener.ts

export const useAuthListener = () => {

  const dispatch = useDispatch();

  // BASICALLY I'LL RUN THE CODE FROM THE THUNK ABOVE IN THE FOLLOWING useEffect

  useEffect(() => {
    // CODE TO LISTEN TO THE auth OBJECT FROM firebase
    // AND UPDATE THE STATE ON EVERY NEW snapshot
    dispatch(ACTIONS.UPDATE_AUTH(auth));
  },[dispatch]);

};

Is one option better than the other? I'm asking because I know this pattern might influence the "testability" of my code in the future, and I don't have much experience with testing.
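For what it's worth, one reason the thunk variant tends to stay easy to test is that it is just a function of (dispatch, getState): it can be dispatched against a mock store with no component render involved. A rough sketch using redux-mock-store; the module path, the UPDATE_AUTH action type, and the idea that the firebase listener is mocked to fire synchronously are all assumptions, not part of the question:

// Sketch: assumes the firebase module used inside the thunk is mocked elsewhere
// (e.g. with jest.mock) so the listener fires synchronously with a fake auth object.
import configureMockStore from 'redux-mock-store';
import thunk from 'redux-thunk';
import { setAuthListener } from './APP_THUNKS'; // assumed path

const mockStore = configureMockStore([thunk]);

test('setAuthListener dispatches an auth update', async () => {
  const store = mockStore({});

  await store.dispatch(setAuthListener() as any);

  // Only the recorded actions need to be asserted; 'UPDATE_AUTH' is an assumed type string.
  expect(store.getActions()).toContainEqual(
    expect.objectContaining({ type: 'UPDATE_AUTH' })
  );
});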

Background Flask toy server for testing

I need a Flask server in multithreaded mode for testing, because some routes under test themselves call other routes on the same server (so in the single-threaded test_client they would hang). I thought the easiest solution would be to start the server manually in a background thread; however, with this setup (shown below) the test cases fail with ConnectionError, presumably because they don't wait for the server to finish starting up.

What are the best practices for setting up multithreaded testing with Flask? Should I continue with this approach and add additional handling to wait for a signal that the server is ready? Add polling with a timeout on the TestCase side?

main_test.py

import unittest
from threading import Thread
from main import app

def _run_test_server():
    app.run(host='localhost', port=3000, debug=True, threaded=True)

Thread(target=_run_test_server).start()
loader = unittest.TestLoader()
tests = loader.discover('.')
testRunner = unittest.runner.TextTestRunner()
testRunner.run(tests)

main.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == "__main__":
    app.run(host='localhost', port=3000, debug=True, threaded=True)

my_test_case.py

import json
import os
import unittest
import urllib.parse

import requests

class MyTestCase(unittest.TestCase):
    """
    Baseclass for testcases
    """
    def fetch_response(self, path):
        return json.loads(
            requests.get(urllib.parse.urljoin(os.environ["ORIGIN_URI"], path)).text
        )

React Testing Library / Jest Update not wrapped in act(...)

I've been getting this error message regarding act while testing.

Warning: An update to EmployeesDashboard inside a test was not wrapped in act(...).

When testing, code that causes React state updates should be wrapped into act(...):

act(() => {
  /* fire events that update state */
});
/* assert on the output */


    in EmployeesDashboard (at EmployeesDashboard.test.tsx:69)"

I can't figure out why, since it seems I've already wrapped everything in act... What in here should I also be wrapping in act? Any help would be greatly appreciated since I'm fairly new to testing.

import React from 'react';
import { mocked } from 'ts-jest/utils';
import { act } from 'react-dom/test-utils';
import { EmployeesDashboard } from '../EmployeesDashboard';
import {
  render,
  wait,
  waitForElementToBeRemoved,
} from '@testing-library/react';

import { Employee } from '@neurocann-ui/types';
import { getAll } from '@neurocann-ui/api';

jest.mock('@neurocann-ui/api');


const fakeEmployees: Employee[] = [
  {
    email: 'joni@gmail.com',
    phone: '720-555-555',
    mailingAddress: {
      type: 'home',
      streetOne: '390 Blake St',
      streetTwo: 'Apt 11',
      city: 'Denver',
      state: 'CO',
      zip: '90203',
    },
    name: 'Joni Baez',
    permissions: ['Employee'],
    hireDate: new Date('2020-09-19T05:13:12.923Z'),
  },
  {
    email: 'joni22@gmail.com',
    phone: '720-555-555',
    mailingAddress: {
      type: 'home',
      streetOne: '390 Blake St',
      streetTwo: 'Apt 11',
      city: 'Denver',
      state: 'CO',
      zip: '90203',
    },
    name: 'Joni Baez',
    permissions: ['Employee'],
    hireDate: new Date('2020-09-19T05:13:12.923Z'),
  },
];

mocked(getAll).mockImplementation(
  (): Promise<any> => {
    return Promise.resolve({ data: fakeEmployees });
  },
);
describe('EmployeesDashboard', () => {
  it('renders progress loader', () => {
    act(() => {
        render(<EmployeesDashboard />)
    });
    const employeesDashboard = document.getElementById(
      'simple-circular-progress',
    );
    expect(employeesDashboard).toBeInTheDocument();
    expect(employeesDashboard).toMatchSnapshot();
  });
  it('renders a table', async () => {
    await act(() => { render(<EmployeesDashboard />) });
    const table = document.getElementById('employees-table');
    expect(table).toBeInTheDocument();
  });
});
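A common cause of this warning (hedged, since the component itself is not shown): the mocked getAll() promise resolves after the synchronous render and act block have finished, so the resulting state update happens outside act. Letting Testing Library wait for the post-fetch UI usually removes the warning, since its async utilities wrap updates in act internally. A sketch using the helpers already imported in this test file:

it('renders a table once the employees have loaded', async () => {
  render(<EmployeesDashboard />);

  // Waits until the loader disappears, i.e. until the state update from the
  // resolved getAll() promise has been applied.
  await waitForElementToBeRemoved(() =>
    document.getElementById('simple-circular-progress'),
  );

  expect(document.getElementById('employees-table')).toBeInTheDocument();
});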

Getting deprecation warnings when running tests, even though the test results are good

❯ rails test
The dependency tzinfo-data (>= 0) will be unused by any of the platforms Bundler is installing for. Bundler is installing for ruby but the dependency is only for x86-mingw32, x86-mswin32, x64-mingw32, java. To add those platforms to the bundle, run bundle lock --add-platform x86-mingw32 x86-mswin32 x64-mingw32 java.
Running via Spring preloader in process 4541
Run options: --seed 56928

Running:

/home/lcsmachado/.rvm/gems/ruby-2.7.0/gems/sprockets-rails-3.2.1/lib/sprockets/rails/helper.rb:355: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/home/lcsmachado/.rvm/gems/ruby-2.7.0/gems/sprockets-4.0.2/lib/sprockets/base.rb:118: warning: The called method `[]' is defined here
/home/lcsmachado/.rvm/gems/ruby-2.7.0/gems/sprockets-rails-3.2.1/lib/sprockets/rails/helper.rb:355: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/home/lcsmachado/.rvm/gems/ruby-2.7.0/gems/sprockets-4.0.2/lib/sprockets/base.rb:118: warning: The called method `[]' is defined here
........

Finished in 0.650968s, 13.8256 runs/s, 15.3617 assertions/s.
9 runs, 10 assertions, 0 failures, 0 errors, 0 skips

The test result is okay, but I don't know the cause of those warning messages. I think they come from Rails/gem files, so I'd prefer not to change them, but I'm still learning Rails and don't know much.

Shared test resources across modules

In my project I currently have the following structure:

Project:

|->module1

|->module2

|->module3

|->src

The project code, i.e. the code in the src folder, depends on all 3 modules. Both module2 and module3 depend on module1 (module1 is basically a bunch of sanity-check code used wherever needed), and module3 depends on module2.

There are some test resource files that are needed for both module2's and module3's tests, as well as for the tests in the project's src.

My module2 is basically a utility that parses an input JSON file and translates it into UDF classes, and module3 takes those objects and handles the processing on them.

I haven't found a solution online for a structure like mine.

P.S. I am also open to suggestions about my overall project structure, if that will simplify things.

Best-Practice and Tooling for Machine Learning Regression Testing

We have a set of classifiers written in Python which mostly classify image data. We would like to ensure that new versions of a classifier, of the test data, or of external libraries do not lower the quality of our classifiers. For quality metrics we use RAM usage, CPU time, and of course classification accuracy. Testing and analysing the models usually takes several hours, and the resulting models are up to a gigabyte in size.

We have to test different variants, e.g. running on CPU, running on GPU, running on Linux, running on Windows. Then we want to have some kind of report to inspect and evaluate the metrics.

I am pretty new to this topic and don't know where to start. Is there common tooling or an approach to:

  • Distribute the test task, with its large amount of test data (10 GB), and the classifiers across different nodes
  • Execute the tests
  • Store the test results (i.e. the metrics) in a (standardised?) file format
  • Gather the test results from the nodes and generate and publish a report
  • And also plot the results over time

We are using GitLab CI for continuous integration and would like to integrate this into our release pipeline.

Of course I tried to google this. However, maybe I am lacking the correct wording, but I couldn't find any suitable answer.