Saturday, November 30, 2019

Unit testing a function inside a div

I have a function inside a div element, as in the code below.

<div className='classA'>
 {
  () => {
   someData.map((item) => {
    <div className='classB'> {item.name} </div>
  })
 }
}
</div>

Can someone please help with unit testing that function using Enzyme?

Test suites always failing - Randomly

I have been studying Jest and I am currently writing a lot of integration tests for my API, using SQLite for the test database.

Something strange is that sometimes a test passes and then fails again. I don't know if this is related to a truncate that I run on the database after each test suite.

Here's the repo with the tests: https://github.com/LauraBeatris/gympoint-api

If someone runs yarn test, they will probably see the user test suite fail and also the registrations test suite fail.

I would appreciate any help!

expected at least 1 bean which qualifies as autowire candidate

I am very new to Spring and trying to develop an application in Spring Boot. I know this is a duplicate question, but I didn't find any solution to my problem. I'm not able to run the Spring tests for my application.

Link to the repository below, with Travis CI added to check the build status:

https://github.com/Strusinio/TAU--GIn-I.7-AI-12c--repozytorium

Thanks in advance for any help

Regular expression not working in Jest moduleNameMapper

I am trying to test a Vue CLI project with vue-test-utils and Jest. I am using some icons from vue-material-design-icons, but they are not getting transformed when I run Jest. This is the error that I get when I run Jest:

    /path/to/node_modules/vue-material-design-icons/Sitemap.vue:1
    ({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,global,jest) 
    {<template functional>
     ^

    SyntaxError: Unexpected token <

      101 | <script>
      102 | import { mapGetters, mapActions } from "vuex";
    > 103 | import ControllerIcon from "vue-material-design-icons/Sitemap.vue";
          | ^
      104 | import SensorIcon from "vue-material-design-icons/AccessPoint.vue";
      105 | import AlphaBoostIcon from "vue-material-design-icons/Alpha.vue";
      106 | 

After reading the README on the jest-transform-stub GitHub page, I tried to stub out the vue-material-design-icons in my jest.config.js file with this configuration:

...
moduleNameMapper: {
  "/vue-material-design-icons\/[\w]+.vue/": "jest-transform-stub",
  "^@/(.*)$": "<rootDir>/src/$1"
},
...

...but it is not working. I have tested the regular expression on both https://regex101.com/ and https://www.regextester.com/, and it works as expected on both sites.

If I hard code the file path into the config, then it works:

...
moduleNameMapper: {
  "vue-material-design-icons/Sitemap.vue": "jest-transform-stub",
  "^@/(.*)$": "<rootDir>/src/$1"
},
...

Obviously I don't want to hard code the file path for every icon in the project, though.

Does anyone know why the regular expression is not working?

Thank you in advance!
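For reference, a sketch of how the mapping could be written so that the key is treated as a regular expression. In jest.config.js the moduleNameMapper keys are plain strings handed to the regex engine, so the surrounding slashes are literal characters rather than delimiters, and backslashes have to be doubled inside a JavaScript string (whether this is the only issue in this setup is an assumption):

...
moduleNameMapper: {
  // matches e.g. vue-material-design-icons/Sitemap.vue
  "vue-material-design-icons/\\w+\\.vue$": "jest-transform-stub",
  "^@/(.*)$": "<rootDir>/src/$1"
},
...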

Test on Room Database, iterable containing "Item" but !item 0 was "Item"

Good afternoon,

I am querying my database in a specific test class to check whether, when I insert a new item, the list contains that item.

@Test
public void insertAndDeleteTask() throws InterruptedException {

    Project projectTartampion = new Project(1L, "Projet Tartampion", 0xFFEADAD1);
    long projectID1L = projectDao.inserProject(projectTartampion);
    Task task1L = new Task(1, projectID1L, "Test", 3);
    taskDao.insertTask(task1L);

    List<Project> allProjects = LiveDataTestUtil.getValue(projectDao.getAllProjects());
    assertNotNull(allProjects);
    assertFalse(allProjects.isEmpty());

    List<Task> allTasks = LiveDataTestUtil.getValue(taskDao.getAllTasks());
    assertNotNull(allTasks);
    assertFalse(allTasks.isEmpty());

    Log.i("TAG", "***********************************" + LiveDataTestUtil.getValue(taskDao.getAllTasks()));

    assertThat(allTasks, contains(task1L)); // HERE TO TEST IF allTasks contains task1L
}

Error shown in the test log:

iterable containing info Task{id=1, projectId=1, name='Test', creationTimestamp=3} but: item 0: was Task{id=1, projectId=1, name='Test', creationTimestamp=3}

But I don't exactly understand what's happening, because when I use Log.i to check whether my item is in the list, it is there.

Thanks for your answer

Why does a Node.js test framework like supertest need the instance of the server?

Why does a Node.js test framework like supertest need the instance of the server to execute API calls?

From their example:

const request = require('supertest');
const express = require('express');

const app = express();

app.get('/user', function(req, res) {
  res.status(200).json({ name: 'john' });
});

request(app)
  .get('/user')
  .expect('Content-Type', /json/)
  .expect('Content-Length', '15')
  .expect(200)
  .end(function(err, res) {
    if (err) throw err;
  });

As an alternative, why can't they use Axios? What is the advantage of using my web-server instance?

Friday, November 29, 2019

Dotenv loads envs but Jest doesn't read them

BACKGROUND

I'm using process.env.<ENV NAME> to set variables in some classes. I need to set them in the tests for the class variables to be set; otherwise the tests fail.

Currently, I'm setting the variables in a beforeAll() hook. However, there are many test files in which I'll have to set these envs. I don't want to replicate this code throughout all these files if I don't have to.

I decided it would be a good idea to set them up prior to each test through a Jest set-up file. In jest.config.js I added setupFiles: ['<rootDir>/jestSetupTest.js']. Inside this file I added require('dotenv').config(). The .env file is in the root directory.

I've got test files in a couple of different directories: ./src/graphql/__tests__ and ./src/utils/__tests__.

PROBLEM

The envs are being set but they are not being read by any of the Jest tests that are running.

ATTEMPTED

I looked into this issue, which got me as far as being able to set up the env vars, but it says nothing about problems using them.

I've added require('dotenv').config() to the test files that use the envs, but that still doesn't work. This surprised me; I thought at least this would set the envs.

I set --debug on Jest but that doesn't show whether envs were set or not.

QUESTIONS

Does anyone know what is going on? Or how I can further diagnose this issue?

I get the impression that envs can be set and used in Jest tests, as per the SO post above. Why am I not able to use them? Could it be a config issue with the way my files are set up?
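For reference, a minimal version of the setup described above, plus a sanity-check test; the variable name MY_SECRET is hypothetical:

// jest.config.js
module.exports = {
  setupFiles: ['<rootDir>/jestSetupTest.js'],
};

// jestSetupTest.js -- runs before each test file
require('dotenv').config();

// in any test file: a quick check that the value is visible at test time
test('env var is loaded', () => {
  expect(process.env.MY_SECRET).toBeDefined();
});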

What does "not testable" mean in this code example?

I need to analyze code coverage after running the code and see whether I have full coverage. Part of the report is shown below:

    game->errorCode = ERROR_NO_MEMORY;                      
    #pragma RVS justification( "COV_STATEMENTS", "not testable");                       
}                       
else {                      
    /* Initialize platform */       
    game->errorCode = platformInit(game);
    if (game->errorCode == ERROR_NONE) {            
        /* If platform was correctly initialized, start the game */     
        startGame(game);        
    } else {                
        #pragma RVS justification( "COV_MCDC", "1:game->errorCode == ERROR_NONE", "not testable");
        #pragma RVS justification( "COV_DECISIONS", "not testable");                
    }

What does it mean when they say "not testable"? Is it a part of the code that cannot be covered? And if so, why?

How to test a model that has_attached_file

I'm testing a controller that is using a Product model which has "has_attached_file" as :cover.

As you know, cover is not a column in the products table, so the test is failing here:

image_tag(product.cover.url(:medium))

My question is: how can I create fixtures for those attached files and associate them with a certain product?

This is product model:

class Product < ApplicationRecord

  has_attached_file :cover, styles: { large: "600x600", medium:"300x300", thumb: "150x150" }

...

Thank you all!

WithClauseError, Elixir unit tests match with {:ok, value}

I'm trying to write safe functional code in Elixir and use unit tests to confirm that my code works correctly. Here is the controller code:

def calculate_price(start_time, end_time, zone, payment_type) do
    with( {:ok} <- validate_times(start_time, end_time),
          {:ok} <- validate_zone(zone),
          {:ok} <- validate_payment_type(payment_type)
    ) do
      elapsed_minutes = div(Time.diff(end_time, start_time), 60)
      cond do
        zone == "A" && elapsed_minutes <= 15 -> {:ok, 0}
        zone == "B" && elapsed_minutes <= 90 -> {:ok, 0}
        zone == "A" && elapsed_minutes > 15 && payment_type == "hourly" -> {:ok, calc(elapsed_minutes - 15, 2, 60)}
        zone == "B" && elapsed_minutes > 90 && payment_type == "hourly" -> {:ok, calc(elapsed_minutes - 90, 1, 60)}
        zone == "A" && elapsed_minutes > 15 && payment_type == "real"   -> {:ok, calc(elapsed_minutes - 15, 0.16, 5)}
        zone == "B" && elapsed_minutes > 90 && payment_type == "real"   -> {:ok, calc(elapsed_minutes - 90, 0.08, 5)}
      end
    else
      {:error, error} -> IO.puts error
    end
  end

  defp validate_times(start_time, end_time) when end_time > start_time, do: :ok
  defp validate_times(_start_time, _end_time), do: {:error, "The start/end time is wrong"}

  defp validate_zone(zone) when zone == "A" or zone == "B", do: :ok
  defp validate_zone(_zone), do: {:error, "The zone is wrong"}

  defp validate_payment_type(payment_type) when payment_type == "hourly" or payment_type == "real", do: :ok
  defp validate_payment_type(_payment_type), do: {:error, "The payment type is wrong"}

  defp calc(minutes_to_pay, price_per_minutes, minutes_per_price_increment) do
    cond do
      rem(minutes_to_pay, minutes_per_price_increment) > 0 ->
        (div(minutes_to_pay, minutes_per_price_increment) + 1) * price_per_minutes
      true -> div(minutes_to_pay, minutes_per_price_increment) * price_per_minutes
    end
  end

controller_test code:

test "calculate price; zone: B, paymentType: real" do
    # 4 hours and 30 minute difference
    startTime = ~T[12:00:00.000]
    endTime = ~T[16:30:00.000]
    zone = "B"
    paymentType = "real"

   assert {:ok, 2.88} == FindmyparkingWeb.ReservationController.calculate_price(startTime, endTime, zone, paymentType)

  end

For this code, I'm trying to validate that the correct parameters are passed in, so that on the happy path my code returns a result of {:ok, value}. If the parameters are wrong, I want to know why the error happened. Currently I am just printing to the command line, but eventually I want to return {:error, reason}. Just putting {:error, error} in the else clause caused a different error.

The result of the test case is: ** (WithClauseError) no with clause matching: :ok

What I think this means is that my calculate_price function is returning {:ok}. I don't understand why the value inside the with clause is being returned and not the values in the do or else clause!

My Elixir version is 1.9.1.

Test failing when executed in a different order

When I execute this program:

use Test;
use NativeCall;

constant LIB  = ('gsl', v23);

sub gsl_sf_airy_Ai(num64 $x, uint32 $mode --> num64) is native(LIB) is export { * }
sub Ai(Numeric $x, UInt $mode --> Num) is export { gsl_sf_airy_Ai($x.Num, $mode) }

ok Ai(0, 0) == 0.3550280538878172, 'Ai 1';
ok gsl_sf_airy_Ai(0e0, 0) == 0.3550280538878172, 'Ai 2';

the tests work fine, even if I swap the two "ok" tests this way:

ok gsl_sf_airy_Ai(0e0, 0) == 0.3550280538878172, 'Ai 2';
ok Ai(0, 0) == 0.3550280538878172, 'Ai 1';

If I move the declarations to a module:

unit module mymodule;
use NativeCall;

constant LIB  = ('gsl', v23);

sub gsl_sf_airy_Ai(num64 $x, uint32 $mode --> num64) is native(LIB) is export { * }
sub Ai(Numeric $x, UInt $mode --> Num) is export { gsl_sf_airy_Ai($x.Num, $mode) }

and write a test program:

use Test;
use lib '.';
use mymodule;

ok Ai(0, 0) == 0.3550280538878172, 'Ai 1';
ok gsl_sf_airy_Ai(0e0, 0) == 0.3550280538878172, 'Ai 2';

again the two tests are executed without errors, but if I swap the last two lines:

ok gsl_sf_airy_Ai(0e0, 0) == 0.3550280538878172, 'Ai 2';
ok Ai(0, 0) == 0.3550280538878172, 'Ai 1';

I get this error: Type check failed for return value; expected Num but got Whatever (*) and I don't understand why. I even suspected a possible memory corruption, so I executed the test program using valgrind, but apparently there's nothing wrong in that department. Any hint?

Why is Molecule not able to start a Docker container?

I am using Molecule to test my Ansible role. Before rebooting, my server was working fine. However, after the reboot, when I run molecule create,

it skips the create process with the message Skipping, instances already created. However, nothing is running:

(myenv)[root]# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

When running molecule converge. I am getting this error:

TASK [Gathering Facts] *********************************************************

fatal: [test_instance]: UNREACHABLE! => {"changed": false, "msg": "Authentication or permission failure.
In some cases, you may have been able to authenticate and did not have permissions on the target directory. 

Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: 
( umask 77 && mkdir -p \"` echo ~/.ansible/tmp/ansible-tmp-1575033184.79-237504774558686 `\" && echo
ansible-tmp-1575033184.79-237504774558686=\"` echo ~/.ansible/tmp/ansible-tmp-1575033184.79-237504774558686 `\" ), 
exited with result 1", "unreachable": true} 

Any idea on how to solve this problem?

How do I write CustomAssertion using FluentAssertions?

There is an official example of how to create a CustomAssertion in the FluentAssertions docs; however, my attempt to apply it fails. Here's the code:

public abstract class BaseTest
{
    public List<int> TestList = new List<int>() { 1, 2, 3 };
}

public class Test : BaseTest { }


public class TestAssertions
{
    private readonly BaseTest test;

    public TestAssertions(BaseTest test)
    {
        this.test = test;
    }

    [CustomAssertion]
    public void BeWorking(string because = "", params object[] becauseArgs)
    {
        foreach (int num in test.TestList)
        {
            num.Should().BeGreaterThan(0, because, becauseArgs);
        }
    }
}

public class CustomTest
{
    [Fact]
    public void TryMe()
    {
        Test test = new Test();
        test.Should().BeWorking(); // error here
    }
}

I'm getting compile error:

CS1061 'ObjectAssertions' does not contain a definition for 'BeWorking' and no accessible extension method 'BeWorking' accepting a first argument of type 'ObjectAssertions' could be found (are you missing a using directive or an assembly reference?)

I also tried to move BeWorking from TestAssertions to BaseTest but it still won't work. What am I missing and how do I make it work?

In an RSpec file, I am getting Error: ActionController::RoutingError: No route matches xxxxxxx despite valid routing

I am very new to testing and new to Rails (less than 1 year of experience). Please keep this in mind before answering.

I have a recipe model which belongs to a source, and a source belongs to a client.

Route is:

client_source_recipes GET /clients/:client_id/sources/:source_id/recipes(.:format) recipes#index

I am trying to test this:

RSpec.describe RecipesController, :type => :controller do

# Prerequisites go here (mentioned at the end of the question)

describe "GET #index" do
    it "assigns all recipes as @recipes" do
      recipe = Recipe.create! valid_attributes
      get :index, params: { client_id: client.id, source_id: source.id, locale: 'cs' }, session: valid_session
      expect(assigns(:recipes)).to eq([recipe])

      # Commented below are different ways I tried but failed:

      # visit client_source_recipes_path(client.id, source.id, locale: 'cs')
      # visit "/client/#{client.id}/source/#{source.id}/recipes?locale=cs"
      # get client_source_recipes_path, params: { client_id: client.id, source_id: source.id, locale: 'cs' }, session: valid_session
      # get client_source_recipes_path(client.id, source.id, locale: 'cs')
    end
  end

Prerequisites for test:

  login_user         # defined - works well

  let(:client) { create :client }                                         # defined in factories 
  let(:source) { create :source, warehouse_id: 1, client_id: client.id }  # defined in factories

  let(:valid_attributes) {
    { name: "name", source_id: source.id }
  }

  let(:valid_session) { {"warden.user.user.key" => session["warden.user.user.key"]} }   # works well in other tests

Why do I get a routing error when the same route works everywhere else?

Errors:

 Error: ActionController::RoutingError: No route matches {:controller=>"recipes", :action=>"/clients/1/sources/1/recipes?locale=cs"}

 Error: ActionController::RoutingError: No route matches {:controller=>"recipes", :action=>"/clients/1/sources/1/recipes"}

# etc. etc. i.e. errors are more or less the same

Thanks in advance.

Can Google Mock EXPECT_CALL also just store the parameter?

For Google mock I can verify any value with EXPECT_CALL(class-instance, function(_));

In this case I need to store the parameter _ in my test for later use. It is not a value but a pointer to a function. How can I do this?

What I need in pseudo code:

TEST(myFixture, test001)
EXPECT_CALL(class-instance, function(_)).do(this->pointer = %1);
…
// end of test
EXPECT_EQ(this->pointer(2), 4);

What should go in place of %1?
Note: "this->" is just to emphasize that it is a local parameter.

Performing a Heteroscedasticity test

I am trying to test the heteroskedasticity of my model. However, it returns an error saying "PanelEffectsResults' object has no attribute 'resid'". I don't know how to fix this.

I used the following code to build the model:

# imports assumed from context: scikit-learn for the split/regression,
# statsmodels for add_constant and the Breusch-Pagan test, linearmodels for PanelOLS
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
import statsmodels.stats.diagnostic as diag
from linearmodels.panel import PanelOLS

#training and testing set
X_train, X_test, Y_train, Y_test = train_test_split(X,Y1, test_size =0.20, random_state= 8)

#creating an instance 

regression_model= LinearRegression()

#fit

regression_model.fit(X_train, Y_train)

#predict
y_predict = regression_model.predict(X_test)



#Add the constant
X = sm.add_constant(X)
model = PanelOLS(Y,X, entity_effects=True)
est = model.fit()
est

#Homoscedasticity

diag.het_breuschpagan(est.resid, est.model.exog, retres=False)

How do I get my resid?

How to check font text on Flutter golden test

I'm making a package for vertical Mongolian text. I have a custom widget that needs a special font to display. I'm trying to write a test that shows the Mongolian text has rendered correctly.

On the emulator it looks like this:

But the golden file looks like this:

I can't verify that the Mongolian is getting rendered correctly if the golden test is just giving me tofu.

This is my test:

testWidgets('MongolText renders font', (WidgetTester tester) async {

  await tester.pumpWidget(
    MaterialApp(
      home: Scaffold(
          appBar: AppBar(title: Text('My App')),
          body: Stack(
            children: <Widget>[
              Center(
                child: MongolText('ᠮᠣᠩᠭᠣᠯ'),
              ),
            ],
          )
      ),
    ),
  );

  await tester.pumpAndSettle();

  await expectLater(
    find.byType(MaterialApp),
    matchesGoldenFile('golden-file.png'),
  );
});

Is there any way to fix this?

I've read these two articles about golden tests:

"ERROR: Validation error" message when executing two Sequelize commands in "pretest" script

I'm writing tests for my project. It uses Sequelize and I thought about doing the following:

"pretest": "NODE_ENV=testing yarn sequelize db:migrate && yarn sequelize db:seed:all",
"test": "mocha --require @babel/register 'src/tests/**/*.spec.js'",
"posttest": "NODE_ENV=testing yarn sequelize db:migrate:undo:all"

But the following shows:

❯ yarn test     
yarn run v1.19.2
$ NODE_ENV=testing yarn sequelize db:migrate && yarn sequelize db:seed:all
$ /home/gabriel/Workspace/graphql-apollo/node_modules/.bin/sequelize db:migrate

Sequelize CLI [Node: 12.13.1, CLI: 5.5.1, ORM: 5.21.2]

Loaded configuration file "src/config/database.js".
== 20191123132531-create-users: migrating =======
== 20191123132531-create-users: migrated (0.047s)

== 20191123132658-create-messages: migrating =======
== 20191123132658-create-messages: migrated (0.028s)

$ /home/gabriel/Workspace/graphql-apollo/node_modules/.bin/sequelize db:seed:all

Sequelize CLI [Node: 12.13.1, CLI: 5.5.1, ORM: 5.21.2]

Loaded configuration file "src/config/database.js".
== 20191123132945-users: migrating =======

ERROR: Validation error

error Command failed with exit code 1.

If I execute the migration and seeding commands separately, they work fine. Why is this ERROR: Validation error happening when I run them in one line?
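One detail worth checking in the pretest line above: in a POSIX shell, a VAR=value prefix applies only to the first command, so the db:seed:all call after && does not see NODE_ENV=testing. Whether that is the cause of the validation error is an assumption, but a sketch of scoping the variable to both commands looks like this:

"pretest": "NODE_ENV=testing sh -c 'yarn sequelize db:migrate && yarn sequelize db:seed:all'",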

Any suggestion on how I can develop a platform like CodinGame or DevSkiller?

I want to develop a platform for coding tests (especially for PHP) like CodinGame. The goal: a code editor in which we can put some code (with a directory tree), and a button to run this code. The result of running the code is displayed on the same page, in a small block. I tried to integrate VS Code in the browser, but I'm not sure this is the best idea. Any suggestions, please?

How can I test a React component that uses the Context API with Jest?

OK, so I'm quite new to React and I started Jest a few days ago, after I finished the V1 of my app. I want to make something very nice, to be proud of my code!

So I started Jest to protect my app against bad pushes, etc.

I am wondering how to test this code with Jest:

import React, { Component } from 'react';
import SignPage from '../Connection/SignPage';
import OgdpcQuery from '../Ogdpc/OgdpcQuery/OgdpcQuery';
import {StateConsumer} from '../../api/context';

export default class Landing extends Component {

render() {
    return (
        <StateConsumer>
            {(value) => {
                if (value.connected === false){
                return (
                    <div className="container">
                        <div className="row">
                            {/* State information on the profile */}
                            <SignPage/>
                        </div>
                    </div>
                )}
                else if(value.connected === true){
                    return (
                        <div className="container">
                            <h1>Connected</h1>
                        </div>
                    )
                } 
            }}
        </StateConsumer>
    )
}
}

The thing blocking me is the "value" (more precisely value.connected): how can I simulate it in Jest and then check what the component renders, for example? My app is full of stuff like this, so I can't move forward if I don't understand how to do it on a simple page like this.

I read a lot of docs but I did not really understand how it works! Jest looks really hard to me at first; initializing mocks and fake functions looks weird to me. I think after a few weeks on it this will be more obvious to me! So if you have some advice, beginner-friendly docs, or help, it would be awesome.

Sorry if this post looks pretty bad because of my English or formulation. Thanks!
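For what it's worth, one common way to test this kind of component is to mock the context module so that StateConsumer simply calls its children with a fixed value. A sketch using Jest and react-test-renderer; the Landing import path, and the fact that the mocked path resolves the same way from the test file, are assumptions:

import React from 'react';
import renderer from 'react-test-renderer';
import Landing from './Landing';

// Replace the real consumer with one that hands a fixed value to its children
jest.mock('../../api/context', () => ({
  StateConsumer: ({ children }) => children({ connected: true }),
}));

test('shows the connected view when value.connected is true', () => {
  const tree = renderer.create(<Landing />);
  expect(tree.root.findByType('h1').children).toContain('Connected');
});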

Error when unit testing an Angular component that uses OpenLayers 6.1.1

I'm trying to run tests on an Angular app. The tests fail on an external module import (OpenLayers).

I get an error when I import TileLayer and run the tests. (This code is in a service that is included via dependency injection; I'm simply trying to test the creation of gis.component.ts.)

gis.component.ts :

export class GisComponent implements OnInit {
  constructor(private readonly gisService: GisService) {}

  ngOnInit() {
    this.gisService.renderMapOnHTML('map-container');
  }
}

gis.service.ts :

import TileLayer from 'ol/layer/Tile';
//...
@Injectable({
  providedIn: 'root',
})
export class GisService {
  //...
}

Error: the import works fine, but it fails when I run the tests:

 FAIL  apps/aims/src/app/components/shared/gis/gis.component.spec.ts
  ● Test suite failed to run

    /home/mehdi/Documents/Developpement/eams/front/workspace/node_modules/ol/layer/Tile.js:17
    import BaseTileLayer from './BaseTile.js';
           ^^^^^^^^^^^^^

    SyntaxError: Unexpected identifier
      1 | import { Injectable } from '@angular/core';
    > 2 | import TileLayer from 'ol/layer/Tile';
        | ^
      3 | import Map from 'ol/Map';
      4 | import OSM from 'ol/source/OSM';
      5 | import View from 'ol/View';

      at ScriptTransformer._transformAndBuildScript (../../node_modules/@jest/transform/build/ScriptTransformer.js:537:17)
      at ScriptTransformer.transform (../../node_modules/@jest/transform/build/ScriptTransformer.js:579:25)
      at Object.<anonymous> (src/app/services/gis/gis.service.ts:2:1)

I'm using:
- Angular 8.2.14
- Jest 8.4.6
- OpenLayers 6.1.1
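For reference, this kind of SyntaxError usually means Jest is not transpiling an ES-module package under node_modules (by default everything in node_modules is skipped). A common approach is to exclude the package from transformIgnorePatterns; whether that is the right fix for this particular setup is an assumption:

// jest.config.js
module.exports = {
  transformIgnorePatterns: ['node_modules/(?!(ol)/)'],
};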

How to test not-equal with a matcher in Flutter

I'm doing testing on render objects in Flutter. I'd like to check for inequality like this (simplified):

testWidgets('render object heights not equal', (WidgetTester tester) async {

  final renderObjectOneHeight = 10;
  final renderObjectTwoHeight = 11;

  expect(renderObjectOneHeight, notEqual(renderObjectTwoHeight));
});

I made up notEqual because it doesn't exist. This doesn't work either:

  • !equals

I found a solution that works so I am posting my answer below Q&A style. I welcome any better solutions, though.

How can I print messages from the ChromeDriver console while my code is running?

I have test code and I need to print messages from the driver and the browser. For this I have the method below, but I don't know how I can print messages while the test code is running. For the test I am using ChromeDriver.

from selenium import webdriver
from termcolor import colored
import time

driver = webdriver.Chrome(path)
#d
for ansDr in driver.get_log('driver'):
    if 'INFO' in ansDr['level']:
        print(colored('INFO: ' + ansDr['message'], 'blue'))
    elif 'WARNING' in ansDr['level']:
        print(colored('WARNING: ' + ansDr['message'], 'yellow'))
    elif 'ERROR' in ansDr['level']:
        print(colored('ERROR: ' + ansDr['message'], 'red'))
    else:
        print(colored(ansDr, 'pink'))

Thursday, November 28, 2019

I set the path of the JDK and ran JMeter, but an error occurred. How do I run JMeter?

Not able to find Java executable or version. Please check your Java Installation. errorlevel=2

How to test the login in Cypress with Microsoft Authentication Library (MSAL)

I am trying to test a React application using Cypress and I am getting issues while logging in.

The application uses MSAL third-party login for authentication.

I am getting the following issues:

  • Couldn't control the Microsoft Authentication Library (MSAL) login popup window using Cypress.
  • The #access_token popup is not closing automatically in Cypress (in the actual application it closes automatically).
  • I need to make all remaining test cases wait until the login popup is closed and the app is redirected back to the application.

Please help me if you have any thoughts/references.

Thanks

Dropping and re-creating a database for testing

I have a .NET Core 3 project with SQL Server, and I'm trying to write a test for an initialization service which runs only the first time the database is created.

There is one database with a lot of DbContexts. What I'm trying to do is drop the database and recreate it, then run the service on it and check whether it worked properly.

What I'm doing now is calling ensureDeleted() and ensureCreated() on all of the contexts in a foreach loop, and it's kind of working for an empty database.

My question is:

  1. Will ensureDeleted() fail if there are data in the db that depend on each other, since each context only deletes some of the database tables?

  2. If so, how can I prevent this? And is there a better way to do it?

  3. Can I just drop the database and recreate it in .NET Core? If I can, is it a good choice, considering that with ensureDeleted() I'm currently only deleting the tables, not the whole database?

*Also, I tried doing my test with SQLite and it didn't work out for me, since it's very different from SQL Server (it doesn't have schemas).

In PyTorch, how do I test a single image with my loaded model?

I made an alphabet classification CNN model using PyTorch, and now I want to test it with a single image that it has never seen before. I extracted bounding boxes from my handwriting image with OpenCV, but I don't know how to feed them into the model.


import cv2
import matplotlib.image as mpimg
import matplotlib.pyplot as plt  # needed for the plotting calls below
im = cv2.imread('/content/drive/My Drive/my_handwritten.jpg')

gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)

contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[1]
rects=[]

for cnt in contours:
  x, y, w, h = cv2.boundingRect(cnt)
  if h < 20: continue 
  red = (0, 0, 255)
  cv2.rectangle(im, (x, y), (x+w, y+h), red, 2)
  rects.append((x,y,w,h))

cv2.imwrite('my_handwritten_bounding.png', im) 

img_result = []
img_for_class = im.copy()

margin_pixel = 60

for rect in rects:
    #[y:y+h, x:x+w]
    img_result.append(
        img_for_class[rect[1]-margin_pixel : rect[1]+rect[3]+margin_pixel, 
                      rect[0]-margin_pixel : rect[0]+rect[2]+margin_pixel])

    # Draw the rectangles
    cv2.rectangle(im, (rect[0], rect[1]), 
                  (rect[0] + rect[2], rect[1] + rect[3]), (0, 0, 255), 2) 

count = 0
nrows = 4
ncols = 7

plt.figure(figsize=(12,8))

for n in img_result:
    count += 1
    plt.subplot(nrows, ncols, count)
    plt.imshow(cv2.resize(n,(28,28)), cmap='Greys', interpolation='nearest')

plt.tight_layout()
plt.show()

Spring Boot test unable to autowire service class

I am attempting to create a Spring Boot test class which should create the Spring context and autowire the service class for me to test.

This is the error I am getting:

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.gobsmack.gobs.base.service.FileImportService' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}

The file structure:


The Test class:

package com.example.gobs.base.service;

import com.example.gobs.base.entity.FileImportEntity;
import com.example.gobs.base.enums.FileImportType;
import lombok.val;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.Date;

import static org.assertj.core.api.AssertionsForClassTypes.assertThat;

@DataJpaTest
@RunWith(SpringRunner.class)
public class FileImportServiceTest {

    @Autowired
    private FileImportService fileImportService;

    private FileImportEntity entity;

The Main application class:

package com.example.gobs.base;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Used only for testing.
 */
@SpringBootApplication
public class Main {
    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }
}

FileImportService interface:

package com.example.gobs.base.service;

import com.example.gobs.base.entity.FileImportEntity;
import com.example.gobs.base.enums.FileImportType;

import java.util.List;

public interface FileImportService {

    /**
     * List all {@link FileImportEntity}s.

Which is implemented by:

package com.example.gobs.base.service.impl;

import com.example.gobs.base.entity.FileImportEntity;
import com.example.gobs.base.enums.FileImportType;
import com.example.gobs.base.repository.FileImportRepository;
import com.example.gobs.base.service.FileImportService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.List;

@Service
@Transactional
public class FileImportServiceImpl implements FileImportService {

    @Autowired
    private FileImportRepository repository;

    @Override
    public List<FileImportEntity> listAllFileImportsByType(FileImportType type) {
        return repository.findAllByType(type.name());
    }

Why can it not find the implementation?

How to use Protractor to find an empty cell in a table

If the cell is non-empty, I can use filter or by.cssContainingText to find the td. But what if the td is empty?

For example, the following table has two identical rows. Each row has 5 p-editable-column cells, but all of them are empty, so how can I select a specific one? E.g., I want to select the second p-editable-column in the first row. (See the sketch after the markup.)

<tbody class="p-datatable-tbody">
    <tr class="p-datatable-row blackFont" draggable="false" style="height: 28px;">
        <td class="" style="min-width: 2.6em; width: 2.6em; padding: 0px; border-spacing: 0px;"></td>
        <td class="" style="min-width: 2.8em; width: 2.8em; padding: 0px; border-spacing: 0px;"><i aria-hidden="true"
                class="file outline vertically flipped icon link noteButton"></i></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
    </tr>
    <tr class="p-datatable-row blackFont" draggable="false" style="height: 28px;">
        <td class="" style="min-width: 2.6em; width: 2.6em; padding: 0px; border-spacing: 0px;"></td>
        <td class="" style="min-width: 2.8em; width: 2.8em; padding: 0px; border-spacing: 0px;"><i aria-hidden="true"
                class="file outline vertically flipped icon link noteButton"></i></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
        <td class="p-editable-column" style="width: 5em; white-space: pre-line;"><a tabindex="0"
                class="p-cell-editor-key-helper p-hidden-accessible"><span></span></a></td>
    </tr>
</tbody>
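For what it's worth, a sketch of selecting cells by position instead of by text, with the class names taken from the markup above:

// first row of the table body, then its second p-editable-column cell
const firstRow = element.all(by.css('tbody.p-datatable-tbody tr.p-datatable-row')).first();
const secondEditableCell = firstRow.all(by.css('td.p-editable-column')).get(1);
secondEditableCell.click();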

Reason to use Postman for Django Rest Framework

I'm used to testing Django REST Framework apps with the test tools available directly in Django and DRF. It's possible to set up a dummy client and expose all the REST methods. At the same time, I see many posts talking about Postman for API testing. I fail to see where the advantage would be.

Is there any reason for me, a single developer, to use Postman? Or perhaps there is only an advantage for shared projects?

Getting and using TestNG @Test annotation parameters

I want to use the testName, suiteName, and description that are specified in the @Test annotation inside @BeforeMethod, but I could not find a way to do so anywhere. I would appreciate your help.

@BeforeMethod
public void beforeMethod() throws Exception {
    initTest(testName here, suiteName here, description here);
}

@Test(testName = "Test Name", suiteName = "Suite Name", description = "Description")
public void Test01() throws Exception {
    //test code
}

How to schedule a test using JMeter for a feature test?

I'm using JMeter for feature testing and I want to schedule a JMeter test. Please help me! Thanks.

Selenium with Python- Message: 'operadriver' executable needs to be in PATH

For checking whether a website loads in Opera using Selenium with Python, I use this code:

def test_opera_compatability(self):
    driver = webdriver.Opera("functional_tests/operadriver")
    driver.get("https://www.google.com/")
    driver.quit()

It returns the following error

Message: 'operadriver' executable needs to be in PATH.

Similar code for Chrome works as intended; it looks like this:

def test_chrome_compatability(self):
    driver = webdriver.Chrome('functional_tests/chromedriver')
    driver.get("https://www.google.com/")
    driver.quit()

Testing instanceof NavigationStart with Jest

I know it's probably a very stupid question, but if someone can help me with it, that would be amazing. I was trying to test this part of the code:

    public subject = new Subject<any>();
    public keepAfterNavigationChange = false;

    constructor(public router: Router) {
        // clear alert message on route change
        router.events.subscribe(event => {
            if (event instanceof NavigationStart) {
                if (this.keepAfterNavigationChange) {
                    // only keep for a single location change
                    this.keepAfterNavigationChange = false;
                } else {
                    // clear alert
                    this.subject.next();
                }
            }
        });
    }

But coverage is always showing that this function is not covered:

event => {
            if (event instanceof NavigationStart) {
                if (this.keepAfterNavigationChange) {
                    // only keep for a single location change
                    this.keepAfterNavigationChange = false;
                } else {
                    // clear alert
                    this.subject.next();
                }
            }

How exactly should I test it?
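One possible approach, sketched with Jest: drive router.events with a plain Subject and assert on the service's own subject. The class name AlertService, its import path, and the constructor shape are assumptions based on the snippet above:

import { Subject } from 'rxjs';
import { NavigationStart } from '@angular/router';
import { AlertService } from './alert.service'; // hypothetical path

test('clears the alert on NavigationStart', () => {
  const events$ = new Subject<any>();
  const service = new AlertService({ events: events$ } as any);
  const nextSpy = jest.spyOn(service.subject, 'next');

  events$.next(new NavigationStart(1, '/somewhere'));

  expect(nextSpy).toHaveBeenCalled();
});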

Wednesday, November 27, 2019

Jest + React + Rails: how to create objects in database for testing?

I have a Rails app with a React front end. I would like to use Jest for feature testing because the rest of the people working on this project specialize in React. The front end and back end talk to each other through an API.

We have a page which lists a number of phone codes. These codes exist in the database. I want to test whether the index page actually renders the phone codes.

If I were using RSpec + Capybara, I would write a feature test like this:

before do
  phone_code = PhoneCode.create(prefix: 495, description: 'Moscow')
  another_code = PhoneCode.create(prefix: 8692, description: 'Sevastopol')
end

describe 'index' do
  it 'lists all phone codes in database' do
    visit 'api/v1/phone_codes'
    expect(page.body).to include(phone_code.description)
    expect(page.body).to include(another_code.description)
  end
end

However, I don't know how to create things in the database from React and its test frameworks, except, maybe, through a creation form, which doesn't exist in our app.

So, what should I do? Is there a way to create objects from Jest? Or should I just make everybody use Capybara?

Getting error org.openqa.selenium.UnhandledAlertException: unexpected alert open even when giving valid input

I am giving input through an XLS file; for valid input I am getting the error message unexpected alert open. I have already handled that popup using an if condition, but I am still getting the error. I have two methods: in the first I open my desired web application, and in the second I perform operations on it.

androidTestUtil in a library

I have a library that contains Test Butler. So in the library I do:

implementation 'com.linkedin.testbutler:test-butler-library:2.1.0'

and in the app consuming this library I do:

androidTestImplementation 'com.github.mylib...'

Now I am wondering how I could do the same for androidTestUtil - I would like the dependency:

androidTestUtil 'com.linkedin.testbutler:test-butler-app:2.1.0'

to come via the lib and not need to be stated in the app.

How to know which window is open on the desktop using Java

I am testing my Windows application using Winium. Sometimes, after the session expires, it asks for a login, and the Winium program does not know that the login box has opened, so it stops the testing.

Is there any way in Winium or Java that I can find out which window is open on the desktop?

Thanks in Advance

How to set cross-platform env variables to start tests before commit?

I have configured tests to run before each commit, but this only works on Ubuntu.

Here is what I have now:

 "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject",
    "test:all": "CI=true react-scripts test"
  },
  "husky": {
    "hooks": {
      "pre-commit": "npm run test:all"
    }
  },

How can I set cross-platform env variables so that the tests run on any operating system?
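For reference, the usual cross-platform approach is the cross-env package (installed as a dev dependency), which sets the variable the same way on Windows and Unix shells. A sketch of the relevant script:

"test:all": "cross-env CI=true react-scripts test"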

How to call test helper function from within integration tests?

I'm trying to figure out how to best organize my tests in Rust, and I'm running into the following problem. I have a test utility (test_util) that I define in a module and I would like to be able to use it from my unit tests as well as from my integration tests.

Definition of test_util in src/lib.rs:

#[cfg(test)]
pub mod test_util {
    pub fn test_helper() {}
}

I can access my helper function from my unit tests in another module, src/some_module.rs:

#[cfg(test)]
pub mod test {
    use crate::test_util::test_helper;

    #[test]
    fn test_test_helper() {
        test_helper();
    }
}

However, when I try to use the utility from my integration test, as in tests/integration_test.rs:

use my_project::test_util::test_helper;

#[test]
fn integration_test_test_helper() {
    test_helper();
}

I get the following compiler message:

8 | use my_project::test_util::test_helper;
  |                 ^^^^^^^^^ could not find `test_util` in `my_project`

Is there a good reason why it is not allowed to access test code from the project within an integration test belonging to that same project? I get that integration tests can only access the public parts of the code, but I think it would make sense to also allow access to the public parts of the unit test code. What would be a workaround for this?

React Test Renderer Act Function

I’ve gone through all the documentation I can find. What does the react test renderer act() function actually do? They give short justifications here and there, but I mean at a more technical level.

Ty!

https://reactjs.org/docs/test-renderer.html#testrendereract

Need a thread-safe Java test library (with mocking capabilities)

We use Mockito as the mocking library for our unit tests. As it turns out, Mockito mocks cannot be accessed concurrently.

According to their docs:

However Mockito is only thread-safe in healthy tests, that is tests without multiple threads stubbing/verifying a shared mock. Stubbing or verification of a shared mock from different threads is NOT the proper way of testing because it will always lead to intermittent behavior.

So now I'm looking for a thread-safe Mockito alternative for unit testing concurrent code. I couldn't google anything relevant.

Any suggestions are welcome

Thank you

How to mock an axios api call to return different values based on input?

Say one has a function that, among other tasks, makes a few API calls using axios. Is there a way, when testing this function, to mock all the axios API calls and specify return values from the calls depending on the input? For example, say the function you want to test is this:

function someFunction (a, b, c) {
    const apiReturnA = axiosApiCall(a)
    const returnB = b + 1
    const apiReturnC = axiosApiCall(c)
    return [apiReturnA, returnB, apiReturnC]
}

I'd like to test someFunction and specify that, every time axiosApiCall gets called, the real function is not executed and a value is simply returned based on its input. How can one do this?
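A sketch of one way this is commonly done with Jest, assuming axiosApiCall lives in its own module (the './api' and './someFunction' paths are hypothetical):

jest.mock('./api', () => ({
  axiosApiCall: jest.fn(),
}));

const { axiosApiCall } = require('./api');
const { someFunction } = require('./someFunction');

test('someFunction uses input-dependent mocked API results', () => {
  // return a value based on the argument instead of hitting the network
  axiosApiCall.mockImplementation(input => (input === 'a' ? 'resultA' : 'resultC'));

  expect(someFunction('a', 1, 'c')).toEqual(['resultA', 2, 'resultC']);
});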

Start a MySQL server for tests

I am currently developing an AWS Lambda function with the Serverless Framework on Node 10. My Lambda executes queries against a MySQL RDS database with the mysql client (https://www.npmjs.com/package/mysql).

I am writing tests and I do not want to mock the database results. I would like the following flow:

  • Before test: start a mysql server, configure the client with correct host and port, create schema, tables, ...
  • During test: insert data and test queries
  • At the end of test: destroy database

This seems like a very standard use case to me, but I can't find anything that simply starts a server. All I find are clients to connect to and query the DB. Is it possible to do this with Node?

Many thanks!

Angular test: get a child component

<app-component>
<app-image [src]="example.svg"></app-image>
</app-component>

app.component.ts

<div>
<p>Some text</p>
<ng-content></ng-content>
</div>

How can I get the app-image element or its src value?

How to find an email address which sends back an automated response?

I would like to test the functionality of a mailbox. I need an email address to which I can send an email and then verify the response. So it should be an email address which responds automatically with an email in about one minute. I have tried googling it many ways but cannot find one; the results are always only about "how to create an automatic response", which is not what I need. Does anyone have an idea how to find such a provider, or know of one? Thanks in advance.

ListenableWorker test is freezing when running All tests in Android

I'm seeing strange behaviour when testing a ListenableWorker. Following the Android guide, I placed this in my test code:

    ListenableWorker testTrackerWorker = TestListenableWorkerBuilder.from(context, TrackerWorker.class).build();
    ListenableWorker.Result result = testTrackerWorker.startWork().get();
    assertThat(result, is(ListenableWorker.Result.success()));

The thing is that when I run all tests for the first time, it freezes on this test. If I stop them and run it in isolation, it finishes, and the next times I run all tests it sometimes freezes and sometimes finishes.

This is the code of TrackerWorker

    @SuppressLint("RestrictedApi")
public class TrackerWorker extends ListenableWorker implements APIRequestFinishListener, APIRequestListener {


    private final String TAG = "TrackerWorker";
    private SettableFuture<Result> future;
    private int pendingTasks;

    public TrackerWorker(@NonNull Context context, @NonNull WorkerParameters workerParams) {
        super(context, workerParams);
    }

    @NonNull
    @Override
    public ListenableFuture<Result> startWork() {

        Log.i(TAG, "startWork");

        future = SettableFuture.create();

        updateTracking();

        return future;
    }

    @Override
    public void onStopped() {
        super.onStopped();
        Log.i(TAG, "onStopped");
    }

    public void updateTracking() {

        pendingTasks = 4;

        // Update server info
        FetchServerInfoTask fetchServerInfoTask = new FetchServerInfoTask(getApplicationContext());
        fetchServerInfoTask.setAPIRequestFinishListener(this, "FetchServerInfoTask");
        fetchServerInfoTask.execute();
        Log.i(TAG, "updateTracking: FetchServerInfoTask executed");

        // check for updated courses
        // should only do this once a day or so....
        SharedPreferences prefs = MobileLearning.getPrefs(getApplicationContext());
        long lastRun = prefs.getLong("lastCourseUpdateCheck", 0);
        long now = System.currentTimeMillis() / 1000;
        if ((lastRun + (TimeUnit.HOURS.toSeconds(12))) < now) {
            APIUserRequestTask task = new APIUserRequestTask(getApplicationContext());
            Payload p = new Payload(MobileLearning.SERVER_COURSES_PATH);
            task.setAPIRequestListener(this);
            task.setAPIRequestFinishListener(this, "APIUserRequestTask");
            task.execute(p);

            prefs.edit().putLong("lastCourseUpdateCheck", now).apply();
        } else {
            pendingTasks--;
        } 

        // send activity trackers
        Log.d(TAG, "Submitting trackers multiple task");
        SubmitTrackerMultipleTask omSubmitTrackerMultipleTask = new SubmitTrackerMultipleTask(getApplicationContext());
        omSubmitTrackerMultipleTask.setAPIRequestFinishListener(this, "SubmitTrackerMultipleTask");
        omSubmitTrackerMultipleTask.execute();


        // send quiz results
        Log.d(TAG, "Submitting quiz task");
        DbHelper db = DbHelper.getInstance(getApplicationContext());
        List<QuizAttempt> unsent = db.getUnsentQuizAttempts();

        if (unsent.size() > 0) {
            Payload p2 = new Payload(unsent);
            SubmitQuizAttemptsTask omSubmitQuizAttemptsTask = new SubmitQuizAttemptsTask(getApplicationContext());
            omSubmitQuizAttemptsTask.setAPIRequestFinishListener(this, "SubmitQuizAttemptsTask");
            omSubmitQuizAttemptsTask.execute(p2);
        } else {
            pendingTasks--;
        } 


        // Attention! if more tasks are added, remember to update pendingTasks method variable
    }

    @Override
    public void apiRequestComplete(Payload response) {
        boolean updateAvailable = false;
        try {

            JSONObject json = new JSONObject(response.getResultResponse());
            Log.d(TAG, json.toString(4));
            DbHelper db = DbHelper.getInstance(getApplicationContext());
            for (int i = 0; i < (json.getJSONArray("courses").length()); i++) {
                JSONObject json_obj = (JSONObject) json.getJSONArray("courses").get(i);
                String shortName = json_obj.getString("shortname");
                Double version = json_obj.getDouble("version");

                if (db.toUpdate(shortName, version)) {
                    updateAvailable = true;
                }
                if (json_obj.has("schedule")) {
                    Double scheduleVersion = json_obj.getDouble("schedule");
                    if (db.toUpdateSchedule(shortName, scheduleVersion)) {
                        updateAvailable = true;
                    }
                }
            }

        } catch (JSONException e) {
            Mint.logException(e);
            Log.d(TAG, "JSON error: ", e);
        }

        if (updateAvailable) {
            Intent resultIntent = new Intent(getApplicationContext(), DownloadActivity.class);
            PendingIntent resultPendingIntent = PendingIntent.getActivity(getApplicationContext(), 0, resultIntent, PendingIntent.FLAG_UPDATE_CURRENT);

            NotificationCompat.Builder mBuilder = OppiaNotificationUtils.getBaseBuilder(getApplicationContext(), true);
            mBuilder
                    .setContentTitle(getString(R.string.notification_course_update_title))
                    .setContentText(getString(R.string.notification_course_update_text))
                    .setContentIntent(resultPendingIntent);
            int mId = 001;

            OppiaNotificationUtils.sendNotification(getApplicationContext(), mId, mBuilder.build());
        }
    }

    private String getString(int stringId) {
        return getApplicationContext().getString(stringId);
    }


    @Override
    public void onRequestFinish(String idRequest) {

        pendingTasks--;

        Log.i(TAG, "onRequestFinish: pendingTasks: " + pendingTasks);

        if (pendingTasks == 0) {
            future.set(Result.success());
        }
    }

    @Override
    public void apiKeyInvalidated() {
        SessionManager.logoutCurrentUser(getApplicationContext());
    }

}

Run tests from the terminal in Android Studio

I have tests that live in the current project folder (I don't need to run all tests in the project). How can I run these tests from the terminal? I use Gradle.

How to set attributes using vue-test-utils on shallowMount?

I want to test methods in my Vue component, but for that I need to mock some attribute data that I will access later as this.$attrs.pattern, etc. My current code is:

let wrapper;

beforeEach(() => {
   wrapper = shallowMount(Input);
});

afterEach(() => {
   wrapper.destroy();
});

it('should pass pattern check', () => {
   // I want to setup pattern attribute here
   expect(wrapper.vm.passPatternCheck).toBeTruthy();
});

I was expecting there to be something like wrapper.setProps() for attributes, but I can't find it.
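For reference, vue-test-utils lets attributes be supplied as a mounting option rather than set afterwards; a sketch of the beforeEach above with a pattern attribute (the pattern value is just an example):

beforeEach(() => {
   wrapper = shallowMount(Input, {
      attrs: {
         pattern: '[0-9]+',
      },
   });
});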

How to run a setup file only once in Jest?

When testing with Jest I need to set up the entire test suite; however, the setup file executes for every test file, as defined in the configuration. Is it possible to have a setup file that is executed only once?
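For reference, besides per-file options like setupFiles, Jest has a globalSetup option that points at a module exporting an async function which runs once per test run. A sketch (the file name is hypothetical):

// jest.config.js
module.exports = {
  globalSetup: '<rootDir>/global-setup.js',
};

// global-setup.js -- runs exactly once, before all test suites
module.exports = async () => {
  // one-time setup here; values are usually handed to tests via process.env
  // or external state, since this runs outside the test files' context
};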

Testing controllers that require login

I'm learning PHPUnit and I want to test pages which require login (like changing your password, adding a new article, etc.). A normal test looks like this:

$response = $this->get('/');

 $response->assertStatus(200);

But if a page requires authentication (is intended for logged-in users), this won't work. How can I solve my problem?

How to add test method's comment sections from IntelliJ automatically?

I am creating test classes from IntelliJ automatically, like the following:

CreateTest

This gives me the following test class in the appropriate module:

public class MyClassTest {
    @Test
    public void myMethod() {
    }
}

What I am looking for is this:

Can IntelliJ automatically insert Given/When/Then comment sections into test methods?

I am searching for something like the following:

public class MyClassTest {
    @Test
    public void myMethod() {
        // Given
        // When
        // Then
    }
}

These sections are useful for a reader coming back after some time has passed, but they are usually missed while writing test methods. I am looking for a way to add this behavior to the IDE.

Testing Vuetify (Vue.js) - Multiple calls on mount throw error

I am currently experiencing odd behaviour when testing my Vue application (specifically when Vuetify is included). I am using Jest as the test runner but experienced the same behaviour with Mocha.

The first thing to notice is that the problem only occurs with mount from @vue/test-utils and not with shallowMount. Also, it only occurs if you use mount twice (I guess the reason is pollution of the Vue object, but more on that later).

Now, my component is mainly just a wrapper around a basic v-data-table, with the property value bound to its items and some custom slots for checkboxes instead of text.

Now the problem. First, this is what the first variant of my test looks like (it's basically what Vuetify recommends; take a look here). As the test itself doesn't really matter, I'll just expect true to be true here:

import Vue from 'vue';
import Vuetify from 'vuetify';
import { mount, createLocalVue, shallowMount } from '@vue/test-utils';

import  PermissionTable from '@/components/PermissionTable.vue';
import { expect } from 'chai';

const localVue = createLocalVue();

// Vue.use is not in the example but leaving it will cause the error that 
// the data table is not registered
Vue.use(Vuetify);

describe('Permissiontable.vue', () => {
  let vuetify;
  let tableItems;

  beforeEach(() => {
    tableItems = [];
    vuetify = new Vuetify();
  });


  it('will test the first thing', async () => {
    const wrapper = mount(PermissionTable, {
      localVue,
      vuetify,
      propsData: {
        value: tableItems
      }
    });

    expect(true).to.be(true);
  });


  it('will test the second thing', async () => {
    const wrapper = mount(PermissionTable, {
      localVue,
      vuetify,
      propsData: {
        value: tableItems
      }
    });

    expect(true).to.be(true);
  });
});

Now as already commented without using Vue.use(Vuetify) I'll get the error that the component v-data-table is not registered. With it I'm left with the following behaviour

  1. "will test the first thing" runs as expected and succeeds
  2. "will test the second thing" fails with the following error:

TypeError: Cannot read property '$scopedSlots' of undefined

and fails at mount(...). To make the behaviour even weirder: if I debug, stop at this line, and run the mount manually in the debug console, it fails the first time with the same error. If I run it again, it works.

Now, I am sure that functions behave the same way if they get the same input, so the input to mount must be altered by the first call. My guess is that the Vue class gets polluted somehow. Looking at the documentation for localVue, this utility exists precisely to prevent pollution of the global Vue class. So I altered my code to:

import Vue from 'vue';
import Vuetify from 'vuetify';
import { mount, createLocalVue, shallowMount } from '@vue/test-utils';

import  PermissionTable from '@/components/PermissionTable.vue';
import { expect } from 'chai';

describe('Permissiontable.vue', () => {
  let vuetify;
  let tableItems;
  let localVue;

  beforeEach(() => {
    tableItems = [];
    localVue = createLocalVue();
    vuetify = new Vuetify();
    localVue.use(vuetify);
  });


  it('will test the first thing', async () => {
    const wrapper = mount(PermissionTable, {
      localVue,
      vuetify,
      propsData: {
        value: tableItems
      }
    });

    expect(true).to.be(true);
  });


  it('will test the second thing', async () => {
    const wrapper = mount(PermissionTable, {
      localVue,
      vuetify,
      propsData: {
        value: tableItems
      }
    });

    expect(true).to.be(true);
  });
});

So I create a new instance of localVue and vuetify for every test and make localVue use vuetify. Now this brings me back to the error:

[Vue warn]: Unknown custom element: - did you register the component correctly? For recursive components, make sure to provide the "name" option.

I also experimented with various ways of injecting vuetify (instantiated) or Vuetify, using the global Vue.use, etc. In the end I'm always left with one of those two behaviours.

Now the workaround seems to be to write each test in a separate file, which works, but I think that is really bad practice, and I want to understand what exactly is happening here.
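
For what it's worth, a commonly suggested arrangement (and roughly what the Vuetify docs describe) is to register Vuetify once on the global Vue in a Jest setup file and pass a fresh new Vuetify() instance to every mount, without installing it on the localVue. A sketch, where the setup file name is an assumption:

// jest.setup.js -- wired up through Jest's setupFiles option
import Vue from 'vue';
import Vuetify from 'vuetify';

Vue.use(Vuetify); // registers v-data-table and friends once, on the global Vue

The specs then keep vuetify: new Vuetify() in the mount options (as in the second variant above) but drop the localVue.use(vuetify) call.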

How to test a net socket in Node.js

There is code that needs to be covered by tests. The main functionality is based on TCP sockets from the net library, and the communication is based on streams. What are some ways to test this code? Is it better to test only the streams? How can I fake them?
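
One common approach is to treat the server as a black box: start it on an ephemeral port inside the test, connect a real net client, and assert on what comes back over the stream. A sketch in Jest, assuming the code under test is a simple echo server (the real server would be imported instead):

const net = require('net');

test('echoes data back to the client', done => {
  const server = net.createServer(socket => socket.pipe(socket)); // stand-in for the real server

  server.listen(0, () => {                       // 0 = pick a free port
    const { port } = server.address();
    const client = net.connect(port, () => client.write('ping'));

    client.on('data', chunk => {
      expect(chunk.toString()).toBe('ping');
      client.end();
      server.close(done);
    });
  });
});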

How to run all tests that reference a method in C#?

I updated some functionality inside a method and I want to make sure that all tests that use this method still pass after the update. Is there a way to do this?

Puppeteer Chromium, disable "Anonymize local IPs exposed by WebRTC"

I'm trying to run puppeteer tests using Chromium against a local server on http://localhost:3080/.

The page streams video over WebRTC, but because it's on localhost, I'd like the "Anonymize local IPs exposed by WebRTC" option from chrome://flags to be set to disabled when launching Chromium (this would be purely for local testing). I pass Puppeteer args like so:

const page = await puppeteer.launch({args: ["...", "..."]});

I just can't seem to find the correct flag to pass to args, even after going through this list (really slow to load). Would anyone have any ideas on how I can get around this issue, or what arg I might be able to pass to Chromium?
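
One thing worth trying (hedged, since I haven't verified it against this exact Chromium build): the chrome://flags entry is reportedly backed by the WebRtcHideLocalIpsWithMdns feature, which can be switched off via --disable-features. A sketch:

const puppeteer = require('puppeteer');

(async () => {
  // Assumption: "Anonymize local IPs exposed by WebRTC" maps to the
  // WebRtcHideLocalIpsWithMdns feature, disabled here via --disable-features.
  const browser = await puppeteer.launch({
    args: ['--disable-features=WebRtcHideLocalIpsWithMdns'],
  });

  const page = await browser.newPage();
  await page.goto('http://localhost:3080/');
  // ... run the WebRTC checks here ...
  await browser.close();
})();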

React: How to test component's input which uses ref?

I have a component which is used for searching subjects:

class Search extends React.Component {
    constructor(props) {
        super(props);

        this.subjectNameInput = React.createRef();
        this.searchSubjectsByName = this.searchSubjectsByName.bind(this);
    }

    searchSubjectsByName(e) {
        console.log("INPUT", this.subjectNameInput.current.value); // <-- empty value
        console.log("INPUT", e.target.value); // <-- correct value
        this.props.searchSubjectsByName(this.subjectNameInput.current.value);
    }

    render() {
        return (
            <div className="input-group mb-3">
                <div className="input-group-prepend">
                    <span className="input-group-text" id="basic-addon1">Search</span>
                </div>
                <input onChange={(e) => this.searchSubjectsByName(e)} ref={this.subjectNameInput} type="text" className="form-control" placeholder="Subject name" aria-label="subject"
                       aria-describedby="basic-addon1"/>
            </div>
        )
    }
}

const mapDispatchToProps = (dispatch) => ({
    searchSubjectsByName(pattern) {
        dispatch(searchSubjectsByName(pattern))
    }
});

const SearchContainer = connect(null, mapDispatchToProps)(Search);

export default SearchContainer;

And I have some tests for it:

describe("Search component spec", () => {
    const middlewares = [thunk];
    const mockStore = configureStore(middlewares);

    ...

    it('emit SEARCH_SUBJECTS_BY_NAME event', () => {
        const expectedActions = [
            {type: types.SEARCH_SUBJECTS_BY_NAME, pattern: 'sample'},
        ];

        const store = mockStore();
        const wrapper = mount(<Provider store={store}><SearchContainer/></Provider>);
        wrapper.find('input').simulate('change', {target: {value: 'sample'}});
        expect(store.getActions()).toEqual(expectedActions)
    });
});

When the change action is simulated, I get an empty value from this.subjectNameInput.current.value, but if I take the value not from the ref but from the event's target, e.target.value, then I get the correct value.

How do I correctly write tests for components that use refs for inputs?
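
Part of the problem is that simulate('change', { target: { value } }) only fakes the event payload; it never writes the value into the real DOM node, so the ref still reads an empty input. A sketch of the same test, setting the node's value through enzyme's getDOMNode() before firing the event:

it('emits SEARCH_SUBJECTS_BY_NAME with the typed value', () => {
  const store = mockStore();
  const wrapper = mount(<Provider store={store}><SearchContainer/></Provider>);

  const input = wrapper.find('input');
  input.getDOMNode().value = 'sample'; // write into the real DOM node so the ref can read it
  input.simulate('change');            // the handler now sees 'sample' via the ref as well

  expect(store.getActions()).toEqual([
    { type: types.SEARCH_SUBJECTS_BY_NAME, pattern: 'sample' },
  ]);
});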

Program crashed in the last step in test Tensorflow-gpu 2.0.0

I am using TensorFlow 2.0.0 and split the dataset into a train set and a test set. The training and testing code is as follows:

for epoch in range(params.num_epochs):
    for step, (x_batch_train, y_batch_train) in enumerate(train_dist_dataset):
        DO TRAINING HERE....
    if epoch % params.num_epoch_record == 0:
        for step, (x_test, y_test) in enumerate(test_dist_dataset):
            DO TESTing HERE....
        checkpoint.step.assign_add(1)
        save_path = manager.save()
        logger.info("Saved checkpoint {}".format(save_path))

However, after the last batch of test data in enumerate(test_dist_dataset), the program crashes with:

F .\tensorflow/core/kernels/conv_2d_gpu.h:964] Non-OK-status: GpuLaunchKernel( SwapDimension1And2InTensor3UsingTiles<T, kNumThreads, kTileSize, kTileSize, conjugate>, total_tiles_count, kNumThreads, 0, d.stream(), input, input_dims, output) status: Internal: invalid configuration argument

Why does this occur, and how can it be solved?

How to load images via file open dialog in TestCafe

Background info:

Our application depends on different sets of medical images (due to the medical environment, we can't share our code). On each image, a set of test cases is run. Each time, we need to manually select a new image, which of course we want to automate. The loading of images is fairly complex, but our devs created a button that covers all of that. This button opens a File Open dialog.

My Question:

How can I open a file dialog window and import new images with TestCafe?

We've tried:

  • t.setFilesToUpload (which did not seem to work, because it needs a selector for the element the images get uploaded to? see the sketch below)
  • Clicking the button (it does not open the dialog window)
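
For what it's worth, t.setFilesToUpload never opens the native dialog at all: it writes the file paths straight into the <input type="file"> element that the custom button ultimately triggers, so the selector it needs is that hidden input rather than the button. A sketch with a hypothetical page URL and file path:

import { Selector } from 'testcafe';

fixture('Image import')
    .page('http://localhost/viewer'); // hypothetical URL

test('loads a study through the hidden file input', async t => {
    // Target the <input type="file"> behind the custom button; the native
    // file-open dialog is bypassed entirely.
    const fileInput = Selector('input').withAttribute('type', 'file');

    await t.setFilesToUpload(fileInput, ['./fixtures/image1.dcm']); // hypothetical path
});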

Why is it so rare to see a C program crash?

Background

Hello!

I'm a professional C++/Python programmer, and lately I've been writing up a small project in C. I have a great interest in mathematics and computer science despite my lack of formal education; in particular, I read a lot about testing, formal proofs of program correctness, and different coding methodologies, like Agile or TDD.

Question

Considering the number of programs written in pure C that we use every day, especially on Linux (the kernel itself is almost completely written in C), why aren't fatal errors a common occurrence when using a computer?

Explanation

I know that for some people this question might seem weird, so here is an explanation of why I would expect code written in C to fail more often than it does.

  • Assertion 1: no matter how good the tests are, testing can only prove the presence of bugs, not their absence.
  • Assertion 2: every project has a constant rate of bugs per line of code, including the code in automated tests as well as the specification.
  • Assertion 3: unlike many (not all) modern languages, the C specification allows code to be incorrect yet compilable, and any non-trivial application has to operate directly on memory addresses. This introduces a class of errors that are inconsistent in their behavior, hard to trace back, often caused not by bad logic but by bad values supplied to the program/function, and (most important for this question) often cause termination of the program by the system, for example in case of a memory access violation.
  • Conclusion 1: all programs written in C, no matter how well maintained, will still contain errors, either not yet detected or introduced with an update.
  • Conclusion 2: if it is true that 1) programs written in C almost always have hidden bugs in them and 2) bugs that could potentially cause an unexpected termination of a C program are the hardest to find, then it follows that, in theory, unexpected termination of a C program should be as common an occurrence as encountering any other error.

If there is an equation C = A/B and the requirement is C >= 0, can you write test cases for this scenario?

If there is an equation C = A/B and the requirement is C >= 0, can you write test cases for this scenario?
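
The requirement suggests a handful of equivalence classes and boundary cases: both operands positive, A = 0, operands with opposite signs, and B = 0. A sketch in pytest, where compute_c is a stand-in for the real implementation:

import pytest

def compute_c(a, b):   # stand-in for the real implementation of C = A / B
    return a / b

@pytest.mark.parametrize("a, b, expected_ok", [
    (10, 2, True),    # typical positive case: C = 5, requirement holds
    (0, 5, True),     # boundary: C = 0, the smallest value the requirement allows
    (-10, 2, False),  # opposite signs: C < 0, requirement violated
    (10, -2, False),  # opposite signs: C < 0, requirement violated
])
def test_c_is_non_negative(a, b, expected_ok):
    assert (compute_c(a, b) >= 0) == expected_ok

def test_division_by_zero():
    # B = 0 produces no valid C at all; this pins down the current behaviour
    with pytest.raises(ZeroDivisionError):
        compute_c(1, 0)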

Tuesday, November 26, 2019

How to test the intrinsic size of a render object in Flutter

I have previously asked about testing the size of a widget in Flutter.

However, now I am trying to test the intrinsic size of the underlying render object.

I tried to do this

testWidgets('MongolRichText has correct min intrinsic width',
    (WidgetTester tester) async {
  const String myString = 'A string';
  await tester.pumpWidget(
    Center(child: MongolText(myString)),
  );

  MongolRenderParagraph text = tester.firstRenderObject(find.byType(MongolRenderParagraph));
  expect(text, isNotNull);
  expect(text.getMinIntrinsicHeight(double.infinity), 100);
});

where MongolText creates a MongolRenderParagraph (similarly to how Text ends up creating a Paragraph). However, I get the following error:

══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════
The following StateError was thrown running a test:
Bad state: No element

How do I get the underlying render object to run tests on it?

I found the answer, so I am adding this as a self-answered Q&A. My answer is below.

How to integrate swagger information submission into automated user interface testing using Katalon Studio?

Good evening, everyone,

I would like to create Katalon interface tests on a system under development, but I am having difficulty. The system involves a car icon that needs to travel through a flow, but to move it through certain stages (such as arrival, middle, and end), it receives Swagger parameters that simulate the PLC bus, like closing a gate to let the car out, for example. Can I create an automated test that sends this information from Swagger by importing Swagger's JSON into Katalon? Is there any other example besides PetStore?

Thanks

Designing a unit-test framework for writing custom tests in CLIPS for CLIPS rules, using a multi-file setup

I'd like to make a unit-test-like framework that allows me to write custom tests for individual rules. I'd like each test to be in its own file, i.e. test_R1.clp would be the test file for rule R1. Each test should be able to load its own facts file. I've tried many variations of the following, including using a different defmodule for each file. Is what I'm trying to do even possible in CLIPS? If so, what else is needed to make this work?

I'd like to run my tests via:

$CLIPSDOS64.exe -f2 .\test_all.clp

With the current example, the error I get is [EXPRNPSR3] Missing function declaration for setup-tests.

I've gotten a single test to work correctly using a unique defmodule for each file (i.e. UNITTEST for the testing framework and R1 for the test_R1 file). However, I would still get errors because of the automatic switching between focus statements when files are loaded, or when functions are defined in other files. I've looked at the basic and advanced CLIPS programming guides, but if I've missed something there, please let me know.

Other specific questions:

  1. Since some tests may load facts that overwrite existing facts, how do I prevent getting errors from redefining existing facts? Do I need to do a (clear) in between running each test?

TestingFramework.clp:

;;; File: TestingFramework.clp

(defglobal ?*tests-counter* = 0)
(defglobal ?*all-tests-passed* = TRUE)
(defglobal ?*failed-tests-counter* = 0)

(deftemplate test_to_run
   (slot testid)
   (slot testname)
   (slot testsetupfunc)
   (slot testcheckfunc))

(deffunction test-check (?test-name ?test-condition)
   (if (eval ?test-condition)
       then (printout t "SUCCESS: Test " ?test-name crlf)
            (printout test_results_file "SUCCESS: Test " ?test-name crlf)
            (return TRUE)
       else (printout t "FAILURE: Test " ?test-name crlf)
            (printout test_results_file "FAILURE: Test " ?test-name crlf)
            (return FALSE)))

(deffunction setup_tests ()
    (open "test_summary_results.txt" test_results_file "w"))

(deffunction finish_tests ()
    (close test_results_file))

(deffunction add_test (?test-name ?test-setup-func ?test-check-func)
    (bind ?*tests-counter* (+ 1 ?*tests-counter*))
    (assert (test_to_run (testid ?*tests-counter*)
                         (testname ?test-name)
                         (testsetupfunc ?test-setup-func)
                         (testcheckfunc ?test-check-func))))

(deffunction run_all_tests ()
    (printout t "About to run " ?*tests-counter* " test(s):" crlf)
    (do-for-all-facts ((?ttr_fact test_to_run)) TRUE
        (funcall (fact-slot-value ?ttr_fact testsetupfunc))
        (if (funcall (fact-slot-value ?ttr_fact testcheckfunc))
            then (printout t "    SUCCESS" crlf)
            else (printout t "    FAILURE" crlf)
                 (bind ?*failed-tests-counter* (+ 1 ?*failed-tests-counter*))
                 (bind ?*all-tests-passed* FALSE)))
    (if ?*all-tests-passed*
        then (printout t "All " ?*tests-counter* " tests passed successfully." crlf)
        else (printout t ?*failed-tests-counter* "/" ?*tests-counter* " tests failed." crlf)))

tests\test_R1.clp:

;;; File: test_R1.clp
;;; Tests for Rule 1

(deffunction R1_TEST_1_SETUP ()
    (load* "FluidSystem_facts_demo.clp")
    (load* "FluidSystem_rules_demo.clp")
    (reset))

(deffunction R1_TEST_1 ()
    (send [JacketWaterInletTempReading] put-hasValue 35.0)
    (send [JacketWaterInletTempReading] put-hasValueDefined DEFINED)
    (send [JacketWaterOutletTempReading] put-hasValue 37.0)
    (send [JacketWaterOutletTempReading] put-hasValueDefined DEFINED)
    (run)
    (return (member$ [DissimilarHighTempFlowRate] (send [CounterFlowHeatExchanger] get-hasIssue))))

test_all.clp:

;;; File: test_all.clp
;;; Run tests via:
;;; CLIPSDOS64.exe -f2 .\test_all.clp

(load* "TestingFramework.clp")
(setup-tests)

;;; Test R1
(load* "tests\\test_R1.clp")
(add_test (test_to_run "R1_TEST_1" R1_TEST_1_SETUP R1_TEST_1))
(clear)  ;; unsure if this is needed

;;; ... more tests to follow

(run_all_tests)

(finish_tests)

Can I write tests for a C# sealed class with protected methods using Robot Framework?

I can't modify the original class. If I pass the class name to the Python constructor I get: "TypeError: cannot derive from MyClass because it is sealed", and if I don't pass it as a constructor argument and use its methods directly I get: "TypeError: cannot access protected member ExecuteTest without a python subclass of MyClass".

To be clear, I'm testing with IronPython and Robot Framework, and the tests are for C# classes.

Kotlintest extensions providing information back to the test

JUnit 5 has a neat extension mechanism which is not compatible with KotlinTest, even though KotlinTest runs on the JUnit platform. While simple use cases in which we just need to log something can be handled by a TestListener, we cannot handle more advanced cases. In particular, how do we interact with the extension? Ideally, I would like to get hold of the extension so I could query it.

In JUnit5 it would be (one of the options anyway)

@ExtendWith(MyExtension.class)
class Something() {

 @MyAnnotation
 MyType myType;

 @Test
 void doSomething() {
    myType.doSomething();
 }

}

In JUnit4 it would be even simpler

@Rule
MyRule myRule;

@Test
void fun() {
  myRule.something();
}

Of course, there is a SpringExtension, but it does reflective instantiation of the class. Is there an easier way to do this?

Testing using Quick iOS: fail function isn't executing and it goes to crash

I am creating a huge test using the Quick testing framework for iOS. Some tasks are asynchronous, so I am calling the waitUntil function with a set timeout.

waitUntil(timeout: TimeInterval(self.timeout)) { done in
        // something to do
        done()
}

Sometimes the test goes wrong because the field wasn't set. However, the code that checks whether the field is set is always executed, so it should call the fail function when the field isn't set and end the test without a crash:

func getField() -> Field {
    let field = blm?.su.first

    if field == nil {
        fail("No field")
    }
    return field!
} 

But I don't know why the code above isn't cancelled when the field is checked and found to be nil; instead, the return statement force-unwraps nil and the app crashes.

Please help me understand what is wrong here.
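
Nimble's fail() only records a failure; it does not stop the test, so execution continues into the force unwrap and crashes. A sketch of one way around it (names taken from the question, everything else an assumption): return an optional and bail out at the call site.

func getField() -> Field? {
    guard let field = blm?.su.first else {
        fail("No field")   // records the failure but does not stop execution
        return nil
    }
    return field
}

// at the call site inside the spec
guard let field = getField() else { return }  // stop here instead of force-unwrapping
// ... continue asserting on `field` ...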

Android Instrumentation testing failing with RuntimeException

I have a legacy Android project with no instrumentation tests. To start writing UI tests, I created an androidTest folder and put my first UI test there. It's a pretty simple demo test:

@RunWith(AndroidJUnit4.class)
public class FirstTest {

    @Test
    public void runFirstTest(){
        Context appContext = InstrumentationRegistry.getTargetContext();
        assertEquals("my-package-name", appContext.getPackageName());
    }
}

But for some reason, every time I run this test I get java.lang.RuntimeException: com.android.build.api.transform.TransformException (crash logs below).

org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:transformClassesWithDexBuilderForbankingDevDebugAndroidTest'.
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:103)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:73)
    at org.gradle.api.internal.tasks.execution.OutputDirectoryCreatingTaskExecuter.execute(OutputDirectoryCreatingTaskExecuter.java:51)
    at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:59)
    at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
    at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:59)
    at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:101)
    at org.gradle.api.internal.tasks.execution.FinalizeInputFilePropertiesTaskExecuter.execute(FinalizeInputFilePropertiesTaskExecuter.java:44)
    at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:91)
    at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:62)
    at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:59)
    at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
    at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
    at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:256)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:249)
    at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:238)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:123)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:79)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:663)
    at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:597)
    at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98)
    at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
    at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: com.android.build.api.transform.TransformException: java.lang.RuntimeException: java.lang.RuntimeException
    at com.android.builder.profile.Recorder$Block.handleException(Recorder.java:55)
    at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:104)
    at com.android.build.gradle.internal.pipeline.TransformTask.transform(TransformTask.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
    at org.gradle.api.internal.project.taskfactory.IncrementalTaskAction.doExecute(IncrementalTaskAction.java:50)
    at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:39)
    at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:26)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:124)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
    at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:113)
    at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:95)
    ... 33 more
Caused by: com.android.build.api.transform.TransformException: java.lang.RuntimeException: java.lang.RuntimeException
    at com.android.build.gradle.internal.transforms.DexArchiveBuilderTransform.transform(DexArchiveBuilderTransform.java:427)
    at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:239)
    at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:235)
    at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:102)
    ... 49 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
    at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
    at java.util.concurrent.ForkJoinTask.join(ForkJoinTask.java:720)
    at com.android.ide.common.internal.WaitableExecutor.waitForTasksWithQuickFail(WaitableExecutor.java:146)
    at com.android.build.gradle.internal.transforms.DesugarIncrementalTransformHelper.getInitalGraphData(DesugarIncrementalTransformHelper.java:162)

bankingDevDebug is my build variant. Since it is a huge project, we have multiple build variants.

How to mock promise - await causing jest test to fail

I am testing a functional component that has a submit button which makes an async call to an API. The async call is located within a custom hook. As per standard testing practices, I have mocked the hook so that my mock will be called instead of the actual async API:

someComponent.test.js

jest.mock("../../../CustomHooks/user", () => ({
  useUser: () => ({
    error: null,
    loading: false,
    forgotPassword: <SOMETHING HERE>

  })
}));

I know that my forgotPassword function is called because when I change it to forgotPassword: "", I get an error in my test stating that forgotPassword is not a function.

A very simple representation of the function that is called when my submit button is clicked is this:

someComponent.js

const submit = async () => {
    await forgotPassword(emailValue);
    setState(prevState => {
      return {
        ...prevState,
        content: "code"
      };
    });
}

NOTE: My call to the async function await forgotPassword... is wrapped in a try/catch block in my code, but I have left this out for clarity.

In production, when the submit button is pressed, the async call occurs, and then the state should be switched, thus rendering some other components. My test checks whether these components have been rendered (I am using React Testing Library for this).

The problem I am having is that no matter what I place in the placeholder of the first code block, my test always fails because the setState block is never reached. If I remove the await statement, then the setState block is hit and the component that I want to appear is there, as the state has changed. However, this will obviously not work as intended outside of the test, since the actual call is asynchronous. Here are some of the approaches that I have tried that do not work:

DOESN'T WORK

forgotPassword: () => {
      return Promise.resolve({ data: {} });
    }
DOESN'T WORK

forgotPassword: jest.fn(() => {
      return Promise.resolve();
    })
DOESN'T WORK

forgotPassword: jest.fn(email => {
      return new Promise((resolve, reject) => {
        if (email) {
          resolve(email);
        } else {
          reject("Error");
        }
      });
    }),

As I have said already, if I remove the await statement, then the state changes and the component appears, and hence the test passes. However, for obvious reasons, this is not what I want.

Extra Info

Here is a simplified version of my test:

it("changes state/content from email to code when submit clicked", () => {
  const { getByTestId, getByText, debug } = render(<RENDER THE COMPONENT>);

  const submitButton = getByTestId("fpwSubmitButton");
  expect(submitButton).toBeInTheDocument();

  const emailInput = getByTestId("fpwEmailInput");

  fireEvent.change(emailInput, {
    target: { value: "testemail@testemail.com" }
  });

  fireEvent.click(submitButton);

  debug();

  // THE STATEMENTS BELOW ARE WHERE IT FAILS, AS THE STATE DOESN'T CHANGE WHEN AWAIT IS PRESENT

  const codeInput = getByTestId("CodeInput");
  expect(codeInput).toBeInTheDocument();
});
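
For what it's worth, the usual fix is to make the test async and switch the final query to one of React Testing Library's findBy* helpers, which return a promise and keep retrying until the element shows up, so the awaited forgotPassword() promise gets a chance to resolve. A sketch (the rendered component name is a hypothetical stand-in for the one above):

it("changes state/content from email to code when submit clicked", async () => {
  const { getByTestId, findByTestId } = render(<ForgotPassword />); // hypothetical component

  fireEvent.change(getByTestId("fpwEmailInput"), {
    target: { value: "testemail@testemail.com" }
  });
  fireEvent.click(getByTestId("fpwSubmitButton"));

  // findBy* waits for the element to appear instead of failing immediately
  const codeInput = await findByTestId("CodeInput");
  expect(codeInput).toBeInTheDocument();
});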

What would be a local "docker test" command?

The official Docker documentation standardizes a way to test images with a sut service in a docker-compose.test.yml file: https://docs.docker.com/docker-hub/builds/automated-testing/

Yet the documentation does not provide any way to run those tests in an environment other than the centralized Docker Hub.

At the same time, another official documentation entry explains that it is possible to override the test command with hooks. Yet, there is no documentation for this elusive test command, nor any example on how to properly override it.

  • Is there such thing as an actual test command?
  • If not, how would a developer locally run the docker test following the proposed format on his local environment?
  • What would be an example test hook override?
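
As far as I can tell there is no standalone docker test CLI command; Docker Hub's test phase is essentially a docker-compose run against the test file, so something along these lines reproduces it locally (the exact invocation is an assumption):

# Build the images described in the test compose file, then run the sut
# service; a non-zero exit code from sut means the tests failed.
docker-compose -f docker-compose.test.yml build
docker-compose -f docker-compose.test.yml run --rm sut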

How to check programmatically if a new screen is fully loaded after a click event in Android?

I am running an automated test framework based on UIAutomator. While performing test steps, I wish to know when a new screen is completely loaded after a click event has occurred, e.g. a button click.

Currently, I have added a 2-second wait for the new screen to load, but that is not an efficient strategy.

I want to handle this case in my test framework and remove the need to wait explicitly.
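
One way to do this with UIAutomator itself is to wait for a view that only exists on the new screen, using UiDevice.wait with Until.hasObject; the resource ids below are hypothetical. A sketch:

// imports: androidx.test.uiautomator.{UiDevice, By, Until},
//          androidx.test.platform.app.InstrumentationRegistry
UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());

device.findObject(By.res("com.example.app", "submit_button")).click();

// Block until the expected element is on screen (or the timeout expires)
// instead of sleeping a fixed two seconds.
boolean loaded = device.wait(Until.hasObject(By.res("com.example.app", "title")), 10_000);
assertTrue("New screen did not load in time", loaded);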

How to test that a job is released after a certain time when failing?

When I have a job like this

class CheckVideoStatusJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $video;

    public function __construct(Video $video)
    {
        $this->video = $video;
    }

    public function handle(CheckStatusAction $action)
    {
        if (! $action->execute($this->video)) {
            $this->release(60);
        }
    }
}

How can I test that, when the check fails, the job is released back onto the queue after 60 seconds?
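
One approach that seems to work: the InteractsWithQueue trait forwards release() to the underlying queue job, so the test can hand the job a mocked Illuminate\Contracts\Queue\Job and expect release(60) when the action reports failure. A sketch (imports for the application classes and the Video factory are assumptions):

<?php

use Illuminate\Contracts\Queue\Job as QueueJobContract;
use Mockery;
use Tests\TestCase;

class CheckVideoStatusJobTest extends TestCase
{
    public function test_job_is_released_for_sixty_seconds_when_the_check_fails()
    {
        $video = factory(Video::class)->create(); // hypothetical factory

        $action = Mockery::mock(CheckStatusAction::class);
        $action->shouldReceive('execute')->once()->andReturn(false);

        $queueJob = Mockery::mock(QueueJobContract::class);
        $queueJob->shouldReceive('release')->once()->with(60);

        $job = new CheckVideoStatusJob($video);
        $job->setJob($queueJob); // provided by the InteractsWithQueue trait

        $job->handle($action);
    }
}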

File size statistics for file sharing sites

I'm currently developing a file sharing platform and I would like to make my stress tests as realistic as possible. To do that I need to generate a lot of test files, and to do that in the most realistic manner, I need to know how the sizes of files uploaded to such a service are distributed.

The only thing I could find was this, which covers my needs to some extent, but I would like some general statistics and not just data for certain document types.

If anyone has some data they are willing to share I would be very grateful.

Python test method for additional method call

I have a situation and I could not find anything online that would help. My understanding is that Python testing is meant to be rigorous, ensuring that if someone changes a method, the test fails and alerts the developers to go rectify the difference.

I have a method that calls 4 other methods from other classes. Patching made it really easy for me to determine whether a method has been called. However, let's say someone on my team decides to add a 5th method call; the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to test that no other calls are made? Refer to example.py below:

example.py:

def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this call in here; the test should fail if it detects a 5th (or any additional) method call.

Is there any way to cause the test case to fail if any additional method calls are added?
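
One way is to route all the patched collaborators through a single parent mock and compare parent.mock_calls against the exact expected list; any call that is not in the list (such as the unwanted classE.method5()) makes the assertion fail. A sketch using unittest.mock:

from unittest import mock

import example


def test_example_makes_exactly_the_expected_calls():
    parent = mock.Mock()
    with mock.patch.object(example, "classA", parent.classA), \
         mock.patch.object(example, "classB", parent.classB), \
         mock.patch.object(example, "classC", parent.classC), \
         mock.patch.object(example, "classD", parent.classD), \
         mock.patch.object(example, "classE", parent.classE):
        example.example()

    # Comparing the full, ordered call list means any extra call -- like the
    # classE.method5() above -- causes the test to fail.
    assert parent.mock_calls == [
        mock.call.classA.method1(),
        mock.call.classB.method2(),
        mock.call.classC.method3(),
        mock.call.classD.method4(),
    ]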

Flutter Integration Testing - Is there any way to relaunch the app during integration testing?

I am trying to do Flutter integration testing. The app stays on the same screen where the previous test finished, but I need to relaunch the app so that testing starts again from the first/launch screen. Is there any way to relaunch the app in Flutter testing?

Why is done() not being called inside .then callback? (Mocha/Chai/Node.js)

I may have misunderstood something about async tests using Mocha and Chai or I may have done something wrong. I assume that there's something that prevents done() being called inside the then() callback. Given the test below:

describe('Post', () => {
    it('should return 201 and have valid title, body, and author', (done) => {
        //mock input
        const new_post = {
            "title": "Sample title",
            "body": "This is the sample body. The author writes down something in this part.",
            "author": "User"
        }

        chai.request(app).post('/addPost').send(new_post).then((res) => {
            expect(res).to.have.status(201);
            expect(res.body.message).to.be.equal("Post created");

            expect(res.body.post.title).to.exist;
            expect(res.body.post.body).to.exist;
            expect(res.body.post.author).to.exist;
            done();
        })
        .catch(err=>{
            done(err);
        });
    });
});

When I gave invalid mock input (e.g. a blank title/body/author), the test above "correctly" showed an error and failed, as I expected. Yet when given complete and valid mock input (as shown above), the test still fails and shows a timeout error in the command-line output:

//cmd output
  1) Post
       should return 201 and have valid title, body, and author:
     Error: Timeout of 5000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
      at listOnTimeout (internal/timers.js:531:17)
      at processTimers (internal/timers.js:475:7)


npm ERR! Test failed.  See above for more details.

I even tried calling done() after the async call inside it(), but that just always results in the test passing no matter what the input is.

Any help or enlightenment would be appreciated as I'm just self-learning testing using Mocha and Chai.
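
One thing worth trying while narrowing this down: Mocha also accepts a returned promise instead of done, which removes the done bookkeeping and surfaces rejections directly (it will not fix a request that genuinely never resolves, but it usually makes the real failure visible). A sketch based on the test above:

it('should return 201 and have valid title, body, and author', () => {
    const new_post = {
        title: "Sample title",
        body: "This is the sample body. The author writes down something in this part.",
        author: "User"
    };

    // Returning the promise lets Mocha wait for it; any rejection fails the test.
    return chai.request(app).post('/addPost').send(new_post).then(res => {
        expect(res).to.have.status(201);
        expect(res.body.message).to.be.equal("Post created");
    });
});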

Monday, November 25, 2019

JMeter test always freezes when the tested server gives up

When trying to run a load test in JMeter 5.1.1, the test always freezes at the end if the server gives up. The test completes correctly if the server does not give up. This is terrible, because the point of the test is to see at what point the server gives up, but as mentioned, the test never ends and it is necessary to kill it by hand.

Example:

  • A test running 500 threads against a local server goes smoothly and finishes with the tidying-up message
  • Exactly the same test running 500 threads against a cloud-based server at some point results in errors; the test goes to about 99% and then freezes on the summary, as in the example below:

summary + 99 in 00:10:11 = 8.7/s Avg: 872 Min: 235 Max: 5265 Err: 23633 (100.00%) Active: 500 Started: 500 Finished: 480

And that's it; you can wait forever and it will just be stuck at this point.

I tried using different thread group types without success. The next step was to change the sampler error behaviour, and yes, changing it from Continue to Start Next Thread Loop or Stop Thread helps and the test ends, but then the results in the HTML report look bizarre and inaccurate. I even tried setting the timeout to 60000 ms in HTTP Request Defaults, but this also gave strange results.

That said, can someone tell me how to successfully run a load test against a server so that it always completes regardless of issues and is accurate? I did see a few old questions about the same issue, and they did not have any answer that would be helpful. Or is there any other more reliable open-source testing tool that also has a GUI for creating tests?