mardi 31 mars 2020

How to run a Gatling script for n iterations with a particular duration

I tried to run a Gatling script with 5 users for a particular duration (say about 10 mins), but the test completes after the users finish one iteration, instead of continuing with the next one.

So please suggest a solution for running 5 users through multiple iterations for about 10 mins.

Could you please let me know how to write a correct test case to test a checkbox using Codecept?

<input type="checkbox" name="scale_project" id="scale_project" value="1"> This is the checkbox's markup; its id is scale_project.

$I->checkOption('form input[name="scale_jobs"]'); This is the test case I wrote, but it is not working.

Writing tests and docs at the same time with Django REST Framework

There is something called Spring REST Docs in Java, where you can write API tests and generate documentation at the same time. Is there a similar test-driven documentation-generation framework for Django REST Framework?

Puppeteer not executing anonymous function in a page

For my tests, I would like to login to this page: https://www.ebay-kleinanzeigen.de/m-einloggen.html

When first requested, this page returns a page like the following:

<html><head><meta charset="utf-8">
  <script>
    function(){/* some logic*/}();
  </script>
</head><body></body></html>

This script has functions and an anonymous function that should be executed when the browser loads the page.

In a normal browser, this function fires an XHR request (where the server sets cookies) and then reloads the same page, which, thanks to the cookies, will then contain the login form.

To see this in action, open a private tab in your favorite browser, open the dev tools, set the network logs to persist, and visit the page. The first network requests will look like the attached dev-tools screenshot.

Using the following Puppeteer script, the browser doesn't execute the anonymous function and gets stuck waiting for the login form, which never appears:

import puppeteer from 'puppeteer';

const main = async () => {
    try {
        const browser = await puppeteer.launch({devtools: true});
        const page = await browser.newPage();
        await page.goto('https://www.ebay-kleinanzeigen.de');
        await page.waitForSelector('#login-form', { visible: true });
        await page.screenshot({path: 'login.png', fullPage: true})
        await browser.close();
    } catch (e) {
        console.log('error',e);
    }

}

main();

I can't use page.evaluate because the content of the function is dynamically created by the server.

Is there a way to let this anonymous function get executed at page load?

What tools are available to test the security of an Angular web application? Am I approaching this correctly?

I'm not sure if I'm approaching this correctly, but I'm looking into testing the security of an Angular web application.

What tools should I be looking into to get some feedback? I have this web app live in production, and it uses Windows Authentication for users. I've tried a couple of tools:

https://wapiti.sourceforge.io/

https://www.zaproxy.org/

Wapiti doesn't report any errors, which I believe is wrong. ZAP keeps throwing 401 errors when I give it my site URL.

Any help or guidance is appreciated!

How to use Jest to test a higher-order function for a Redux action with a nested function

I am using Jest to test a Redux action function fn1. fn1 is a higher-order function that wraps fn2. My test just makes sure fn2 is called when fn1 is executed, but it doesn't seem to work. I am thinking about using jest.spyOn, but that doesn't seem to make sense here.

myActions.js:

export const fn1 = obj => {
  return strInput => {
    fn2(strInput, obj);
  };
};

export const fn2 = (strInput, obj) => ({name:strInput, obj});

myAction.test.js:

import { fn1, fn2 } from './myActions';

it("should call fn2", () => {
  fn1({ test: "test" })("David");
  expect(fn2).toHaveBeenCalled();
});

Is there a way to test Django project creation with pytest/Django test suite?

I created a Django plugin system which creates some boilerplate code. It can be used in any Django project (GDAPS), and provides a few management commands.

What is the best way to test this whole suite? I mean, I can create bash scripts that setup fake Django projects which include my project, and then call all the management commands like makemigrations, migrate etc. to set it up fully, call my special commands (./manage.py initfrontend) and check if the results created the right files correctly.

Now, bash scripts are not my favourite testing tool; I'd rather stay with Python and pytest if possible. Is there a way to test things like that? How can I start here? I can't wrap my head around this. I have already written plenty of unit tests for various features of the framework, but these tests are more like integration tests.

I know I can use django.core.management.call_command() to call management commands from code. But how do I set up the "fake" project, with its own temp directory for each test? Thanks for your help.
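One way to sketch this with pytest is to scaffold a throwaway project per test into the tmp_path fixture and then drive manage.py as a subprocess, which is close to the bash-script approach but stays in Python. This is a sketch under assumptions: the project name "demo" and the "frontend" output directory are illustrative; initfrontend is the command from the question.

```python
import subprocess
import sys

def startproject_cmd(name, target_dir):
    """Command line that scaffolds a bare Django project into target_dir."""
    return [sys.executable, "-m", "django", "startproject", name, str(target_dir)]

def test_initfrontend_scaffolding(tmp_path):
    # Each test gets its own fresh "fake" project in a temp directory.
    subprocess.run(startproject_cmd("demo", tmp_path), check=True)
    manage = tmp_path / "manage.py"
    subprocess.run([sys.executable, str(manage), "migrate"], check=True)
    subprocess.run([sys.executable, str(manage), "initfrontend"], check=True)
    # Assert on whatever files the command is supposed to create.
    assert (tmp_path / "frontend").exists()
```

Running manage.py via subprocess rather than call_command() exercises the scaffolded project's own settings module, at the cost of being slower than in-process calls.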

Extending supertest in TypeScript

I'm trying to create an extension to supertest.

Using what I found in the question Extending SuperTest, I have this working example in JavaScript:

const request = require('supertest');
const Test = request.Test;

Test.prototype.authenticate = function(user) {
  const {token, xsrfToken} = user.tokens;

  return this
   .set('Authorization', `Bearer ${token}`)
   .set('X-XSRF-TOKEN', xsrfToken);
}

And inside a test block I can use:

request(app)
  .post('/user/settings')
  .authenticate(user)
  .send(...)

This works fine. The problem now is using the extension in a *.test.ts file.

As suggested in Extend Express Request object using Typescript, I tried to create a declaration file to use the TypeScript Declaration Merging feature.

// file location src/types/supertest

declare namespace supertest {
  export interface Test {
    authenticate(user: any): this; // I didn't put a type on user to simplify here.
  }
}

and also changed my tsconfig.json

{
  "compilerOptions": {

    ...

    "typeRoots": ["./src/types"],

    ...

  }
}

But when I run npx tsc

$ npx tsc
src/api/user.test.ts:51:8 - error TS2551: Property 'authenticate' does not exist on type 'Test'.

51       .authenticate(user);
          ~~~~~~~

Is there a way to fix this and use this approach in TypeScript?

How to compare text with accents in Cypress?

I am trying to compare text in Cypress; the text has words with accents, and it throws the following error:

AssertionError Timed out retrying: expected '' to have text '\n Su lote de distribuci�n se ha creado correctamente, en breve sus comprobantes se enviar�n a sus respectivos destinatarios.\n', but the text was '\n Su lote de distribución se ha creado correctamente, en breve sus comprobantes se enviarán a sus respectivos destinatarios.\n

Performance tests with pytest: how to make delays and receive time info?

Please, can anyone help me out with the following task? (I'm new to testing APIs and Python.)

I have several tests, but I need to create various delays (pauses) between requests. The measurement results should be displayed on stdout: for each combination (endpoint, delay between requests) I should report the 90th percentile of the application response time. How can this be achieved correctly with pytest?

For example, a basic test:

import requests

def test_connection():
    s = requests.Session()
    r = s.get(url, headers=headers)
    if r.status_code == 200:
        print('Authorization is successful')
    else:
        print('Authorization failed with code: ' + str(r.status_code))
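For the delay-and-percentile part, a minimal sketch (assuming Python 3.8+ for statistics.quantiles; request_fn stands for whatever call each test makes, e.g. lambda: s.get(url, headers=headers)):

```python
import statistics
import time

def p90_response_time(request_fn, n_requests=20, delay=0.1):
    """Call request_fn n_requests times with a fixed delay between calls
    and return the 90th percentile of the response times, in seconds."""
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        request_fn()
        times.append(time.perf_counter() - start)
        time.sleep(delay)
    # quantiles(n=10) returns the nine decile cut points; index 8 is P90.
    return statistics.quantiles(times, n=10)[8]
```

Printing the result for each (endpoint, delay) combination, with pytest run as pytest -s, surfaces it on stdout.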

Error CS0120: I'm trying to make a label's text change every X seconds

So basically I am trying to make a fake, trolling program that pretends to fix Visual Studio 2019 for my friend (he wants this, I don't know why). Here's the code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace VS_Error_Checker
{

    public partial class Form2 : Form
    {

        private System.Windows.Forms.Timer x = new Timer();
        public event EventHandler Tick;

        public Form2()
        {
            InitializeComponent();



        }


        private void pictureBox1_Click(object sender, EventArgs e)
        {

        }

        private void button1_Click(object sender, EventArgs e)
        {
            x.Tick += new EventHandler(TimerEventProcessor);

            label1.Text = "Corrupted Files : 15";
            x.Interval = 3000;
            label1.Text = "Corrupted Files : 34";
            x.Interval = 1500;
            label1.Text = "Corrupted Files : 40";
            x.Interval = 1500;
            label2.Text = "Corrupted registry keys : 15";
            x.Interval = 1500;
            label2.Text = "Corrupted registry keys : 27";
            x.Interval = 900;

            label3.Text = "Non-downloaded files : 14";

        }
        private static void TimerEventProcessor(Object myObject, EventArgs myEventArgs)
        {
            x.Stop();
        }
            private void timer1_Tick(object sender, EventArgs e)
        {
            progressBar1.Increment(1);

        }

    }
}

Please help, I really need this because I am working on some other projects that use the same code. :(

Only build Angular 9 project once for package output and running tests

In our CI we currently run ng test before ng build. Since the upgrade to Angular 9, this leads to the full project being compiled twice: once for the tests and once for the build.

This takes up about 2 minutes of extra time in our CI, and it is a redundant task.

Is it possible to run the jasmine tests against the compiled output?

Convert lxml.etree._Element to an XML object

I want to convert my lxml.etree._Element into an XML object:

import xmltodict
from lxml import etree as ET

xmlFilePath = "/data/equifax-voi-response.xml"
dom = ET.parse(xmlFilePath)
XMLObject = xmltodict.parse(dom)

I have been doing this, but it is not working. Error message is:

parser.Parse(xml_input, True)
TypeError: a bytes-like object is required, not 'lxml.etree._ElementTree'

How to convert it?
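The traceback points at the fix: xmltodict.parse() accepts a string or bytes, not a parsed tree. A sketch (shown with the stdlib ElementTree; lxml's etree.tostring behaves the same way) serializes the tree first, though feeding the file to xmltodict directly also avoids the intermediate parse:

```python
import xml.etree.ElementTree as ET

def tree_to_xml_bytes(tree_or_element):
    """Serialize a parsed tree back into bytes that xmltodict.parse accepts."""
    root = tree_or_element.getroot() if hasattr(tree_or_element, "getroot") else tree_or_element
    return ET.tostring(root)

# XMLObject = xmltodict.parse(tree_to_xml_bytes(dom))
# ...or skip the tree entirely:
# XMLObject = xmltodict.parse(open(xmlFilePath, "rb"))
```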

bash 30 second delay when checking if file exists

I have a file system that is shared between several computers. I cannot open network connections between these computers, so I have written a program to communicate via files. A server checks a folder for the appearance of instruction files; it reads and executes them, writes an output file, and then creates a signal file to indicate the output is ready.

The client checks for the signal file with a while loop:

while [[ ! -e $READY_FILE ]]
  do sleep 1
done
do something

The server picks up and processes the file almost immediately and creates the signal file, but I am seeing a strange latency on the client side.

When the server and client run on the same machine, the latency is very low. When I run the client on a separate computer, the latency is around 30 seconds.

time bash client.sh -f commands.txt

real    0m31.945s
user    0m0.048s
sys     0m0.314s

This is reproducible +/- 2 seconds.

I can kill the problem by making the client computer do anything with the working directory.

time bash client.sh -f commands.txt & sleep 5; ls $wd >/dev/null

real    0m5.120s
user    0m0.014s
sys     0m0.083s

time bash client.sh -f commands.txt & sleep 3.5; ls $wd >/dev/null

real    0m3.749s
user    0m0.011s
sys     0m0.055s

I can correct it in the program by changing the while loop to

while [[ ! -e $READY_FILE ]]
  do ls $wd >/dev/null
  sleep 1
done

Now I get:

time bash client.sh -f commands.txt

real    0m1.075s
user    0m0.004s
sys     0m0.056s

My question is: why is there a 30-second delay before the test [[ -e $READY_FILE ]] detects the file?

Running multiple RSpec tests with metadata

I am trying to run multiple tests in RSpec using metadata tags. I usually run a test like rspec -t case_id:1234, and this works fine. However, I want to be able to run multiple case_ids. I tried things like rspec -t case_id:1234, case_id:4321 and -t case_id:1234 && -t case_id:4321, but this just makes two separate test runs, creating two separate environments. I want both tests to run in the same environment.

GoLang - Is the "run test" option a decorator provided by the "testing" package?

In the below code snippet from the VS Code IDE (I could not paste the option as text):

After importing the testing package, we see "run test", "debug test", "run package tests" & "run file tests" options as hyperlinks.


To understand the mechanics behind it,

1) How are these options enabled immediately after importing the testing package?

2) Are these options similar to Python decorators?

Python: How can I provide an input file in a test function?

I have a function whose first parameter is file_name, so in my test function I need to provide the file name, which I did manually. How can I automatically create the input file from within the test? I use "mockpatch".

Karate-UI Automation - How to press a key without needing to be in an input field (feature file)

How do I press a key when I am simply on the page? E.g. I need to press the ESC key or some key combination. The documentation describes how to do this when you are in an input field, and that works fine. But if I want to press a key without using an input field, I am not successful. (In the feature file I tried, for example, driver.input(Key.ENTER), but it did not work.)

Thank you.

How to write a test function

I found this test code, ran it against various code, and it works correctly, but I am not able to understand how it is written and what the code block linenum = sys._getframe(1).f_lineno is doing, or how it does it. So please tell me how this code works.

import sys

def test(did_pass):
    """ Print the result of a test. """
    linenum = sys._getframe(1).f_lineno  # Get the caller's line number.
    if did_pass:
        msg = "Test at line {0} ok.".format(linenum)
    else:
        msg = "Test at line {0} FAILED.".format(linenum)
    print(msg)
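To see what sys._getframe(1).f_lineno produces, here is a tiny standalone sketch: the helper reports the line number of whichever line called it, because frame 0 is the helper's own stack frame and frame 1 is the caller's.

```python
import sys

def caller_lineno():
    # Frame 0 is this function itself; frame 1 is whoever called it.
    return sys._getframe(1).f_lineno

print(caller_lineno())  # prints the line number of this print() call
```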

Using a loop to fill an array with 5 given strings

I am looking for a way to fill in the string array when the array length is given by a test function. For example, when the test gives length 100, I want these 5 string values to repeat throughout the array so that it works until the end. So far I managed to fill the array with the first 5 values; however, I cannot find a way to loop this, and the test fails when a higher array length is given. Thank you for any help!

public class Sid {
    public static String howMuchILoveYou(int nb_petals) {

        String Petals[]=new String[7];
        Petals[0]="zero index"; //cannot skip this for some reason
        Petals[1]="I love you";
        Petals[2]="a little";
        Petals[3]="a lot";
        Petals[4]="passionately";
        Petals[5]="madly";
        Petals[6]="not at all";


        return Petals[nb_petals].toString();
    }
}

TEST

import static org.junit.Assert.assertEquals;
import java.util.Random;
import org.junit.Test;

public class SampleTest {
    @Test
    public void test1() {
        assertEquals("I love you", Sid.howMuchILoveYou(1));
        assertEquals("a little", Sid.howMuchILoveYou(2));
        assertEquals("not at all", Sid.howMuchILoveYou(6));
        assertEquals("I love you", Sid.howMuchILoveYou(343));    //THIS ONE FAILS
    }
}

Karate-UI Automation - How to close Location allowance window (Chrome)

I am using the Karate-UI automation software and run my test scenario in the Chrome browser. When I go to a page where a map is displayed (e.g. Mapbox), the user is asked about location allowance (screenshot) with Allow and Deny buttons. Is there some easy trick to confirm/deny/close the dialog in a scenario step in the feature file?


Thank you for your advice.

JavaMail Unit Test for Downloading Email Attachments


I have a method that connects to an outlook mail server, filters the inbox based on today's date and then downloads all of the attachments from those emails.
I am trying to write a unit test for this method but do not know where to begin. What exactly should I be testing with regard to this method, and how would you suggest I go about unit testing it?
Below is the method I want to test.

try {
    // Creating mail session.
    Session session = Session.getDefaultInstance(props, auth);

    // Get the POP3 store provider and connect to the store.
    store = session.getStore(PROTOCOL);
    store.connect(HOST, Username, Password);

    // Get folder and open the INBOX folder in the store.
    inbox = store.getFolder("INBOX");
    inbox.open(Folder.READ_ONLY);
} catch (Exception e) {
    log.warn("Failed to connect to mailbox", e);
}

try {
    //Searching the inbox for the specified search condition above
        Message[] messages = inbox.search(searchCondition);

        for (int i = 0; i < messages.length; i++) {

            Message message = messages[i];
            String subject = message.getSubject();
            String contentType = message.getContentType();
            String messageContent = "";
            String attachFiles = "";

           //The messages have to have the correct date for the attachments to be downloaded (correct date is today's date). 
            if(message.getSentDate().toString().contains(formattedDate) && message.getSentDate().toString().contains(formattedYear)) {
                System.out.println("Found message #" + i + ": " + subject);
                 Multipart multiPart = (Multipart) message.getContent();
                 int numberOfParts = multiPart.getCount();
                for (int partCount = 0; partCount < numberOfParts; partCount++) {

                    MimeBodyPart part = (MimeBodyPart) multiPart.getBodyPart(partCount);
                    if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
                        // this part is attachment
                        String fileName = part.getFileName();
                        attachFiles += fileName + ", ";
                        part.saveFile(saveDirectory + File.separator + fileName);
                        } 
                    else {
                        // this part may be the message content
                        messageContent = part.getContent().toString();
                    }
                }

                if (attachFiles.length() > 1) {
                    attachFiles = attachFiles.substring(0, attachFiles.length() - 2);
                }
            } else if (contentType.contains("text/plain") || contentType.contains("text/html")) {
                Object content = message.getContent();
                if (content != null) {
                    messageContent = content.toString();
                }
            }
        }
   }catch(Exception e) {
       log.error("Failed to download attachments from email server",e);
   }

Run FailedRunner more than once in Jenkins

I have a question about running my failed tests from cucumber on jenkins.

I currently have a "Failed Runner" that runs after all my tests are complete, in case a number of tests fail. The problem is that sometimes tests need to be re-run more than once because the container sometimes fails. How do I configure Jenkins to run this runner more than once?


How to run headless tests in selenium-cucumber-js

I am using the selenium-cucumber-js library and have the following questions:

  1. I want to check localStorage in a test. How can I do that?
  2. Am I able to access the window object?
  3. How can I run this headless?

Thanks

lundi 30 mars 2020

Unable to create user and login inside of rails test

I was working on my rails application's tests and noticed some of my tests were failing after I added a login feature, since the views use the current user_id from the session variable, which was undefined during testing.

I tried to remedy this by creating a post request to create a user (a user can be a professor or a student for my app) and then to login with that user inside the test:

courses_controller_test.rb

setup do
    @course = courses(:one)
    @user = professors(:admin)
end

test "should get new" do

  professor_http_code = post professors_path, params:  {professor: {firstname:@user.firstname,
                                                     lastname: @user.lastname,
                                                     email: @user.email,
                                                     password: "123456",
                                                     password_confirmation: "123456"}}

  puts "Professor post http code: " + professor_http_code.to_s
  login_http_code = post login_path, params: {email: @user.email,
                                           password: "123456",
                                           type: {field: "professor"}}
  puts "Login post http code: " + login_http_code.to_s
  get new_course_url
  assert_response :success
end

The test fails with the same problem (no current user when rendering the view) and produces the following output in the console:

Console output

Running via Spring preloader in process 22449
Run options: --backtrace --seed 26071

# Running:

.Professor create params: <ActionController::Parameters {"firstname"=>"foo", "lastname"=>"bar", "email"=>"foobar@gmail.com", "password"=>"123456", "password_confirmation"=>"123456"} permitted: false>
Professor not saved to db
..Professor post http code: 200
user login params: #<Professor id: 135138680, firstname: "foo", lastname: "bar", email: "foobar@gmail.com", created_at: "2020-03-31 02:12:50", updated_at: "2020-03-31 02:12:50", password_digest: nil>
Login post http code: 500
F

Failure:
CoursesControllerTest#test_should_get_new [/home/sruditsky/Homework/Capstone/team-formation-app/test/controllers/courses_controller_test.rb:25]:
Expected response to be a <2XX: success>, but was a <500: Internal Server Error>

And here are my session and professor controller functions which are handling the requests:

Professors Controller

class ProfessorsController < ApplicationController
...

  def create
    @professor = Professor.new(professor_params)
    puts "Professor create params: " + params[:professor].inspect
    respond_to do |format|
      if @professor.save
        puts "password_d: " + @professor.password_digest
        log_in(@professor, "professor")
        format.html { redirect_to @professor, notice: 'Professor was successfully created.' }
        format.json { render :show, status: :created, location: @professor }
      else
        puts "Professor not saved to db"
        format.html { render :new }
        format.json { render json: @professor.errors, status: :unprocessable_entity }
      end
    end
  end
...

Sessions Controller

class SessionsController < ApplicationController

...
def create
    user = nil
    type = params[:type][:field]
    if type == "student"
      user = Student.find_by_email(params[:email])
    elsif type == "professor"
      user = Professor.find_by_email(params[:email])
    end
    puts "user login params: " + user.inspect
    if user && user.authenticate(params[:password])
      puts "logging in"
      log_in(user, type)
      redirect_to root_url, notice: "Logged in!"
    else
      puts "invalid password"
      flash.now[:alert] = "Email or password is invalid"
      render "new"
    end
  end

...

The console output shows that the professor is not being saved to the database, but creating a professor account on the application works fine, and also when I type the following into the rails console in the test env it works fine:

app.post "/professors", params: {professor: {firstname: "foo", lastname: "bar", email: "foobar@gmail.com", password: "123456", password_confirmation: "123456"}} 

I have tried adding a random authenticity_token to the params, hardcoding all the strings in the params instead of using the @user object, and dropping and recreating, migrating, loading, and preparing my test database and have had no luck.

Let me know if you need to see something else in my application to solve the problem, and any help would be super appreciated!

How to handle different responses for the same request in Karate API testing?

I have a question about how to handle different responses for the same request in a Karate API test. E.g. the same request:

Given path '/tickets/2000'
When method get

Response: 1) if ticket #2000 is not expired, then match the response against the expected result; 2) if ticket #2000 is expired, then match response.error == 'Ticket is expired'.

So how do I match the two different results? I need to handle both. Can I use try/catch, and how would I use it? Can you give me a Karate syntax example, please?

Thanks

How to verify that a list of keys is present in the JSON?

In my API testing I am using JContainer to convert the response to JSON. Ex:

[Test]
public void GetUsersList()
{
    var response = us.UserList();
    JContainer jsonresponse = rh.ConvertResponseToJson(response);
}

I am trying to run the following validation against the JSON: verify that all keys (id, timestamp, type, etc.) are present. Here is my JSON:

[
  {
    "id": "aa0db615-d4cb-4466-bc23-0e0083002330",
    "timestamp": "2020-02-11T19:00:00-05:00",
    "type": 33554432,
    "info": "Full Synchronization request for all endpoints",
    "schedule": "once",
    "lastRun": null,
    "flags": 6,
    "creator": null,
    "isEditable": true,
    "location": 0,
    "duration": null
  },
  {
    "id": "70baa28c-e270-447b-b88a-20d30a9542db",
    "timestamp": "2020-02-11T19:00:00-05:00",
    "type": 33554432,
    "info": "Full Synchronization request for all endpoints",
    "schedule": "once",
    "lastRun": null,
    "flags": 6,
    "creator": null,
    "isEditable": true,
    "location": 0,
    "duration": null
  }
]

Here is my Convert respone to Json for reference

 public JContainer ConvertResponseToJson(HttpWebResponse response)
        {
            string localString;

            if (response.ContentEncoding.Contains("application/xml"))
            {
                // Convert the escaped Stream into an XML document.
                ConfigXmlDocument xmlDocument = new ConfigXmlDocument();
                xmlDocument.LoadXml(ConvertResponseStreamToString(response));

                // Now convert the properly-escaped JSON for the response into a JContainer
                localString = JsonConvert.SerializeXmlNode(xmlDocument);
            }
            else
                localString = ConvertResponseStreamToString(response);

            return JToken.Parse(localString) as JContainer;
        }

For now I created a model of the JSON to read it by array index, but I am doing multiple assertions to validate all the keys. I want to just loop through them. Here is what I have so far:

var response = us.UserList();
JContainer jsonresponse = rh.ConvertResponseToJson(response);
var castedModel = jsonresponse.ToObject<IList<Model>>();
Assert.IsNotNull(castedModel[0].info);  // This is repeated; I am trying to avoid this
Assert.IsNotNull(castedModel[0].task);
Assert.IsNotNull(castedModel[0].timestamp);
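Language aside, the idea is one set difference per array element instead of one assertion per field. A Python sketch of the same check (the required-key list is taken from the JSON above):

```python
import json

REQUIRED_KEYS = {"id", "timestamp", "type", "info", "schedule", "lastRun",
                 "flags", "creator", "isEditable", "location", "duration"}

def missing_keys(response_text):
    """For each object in the JSON array, report which required keys it lacks."""
    return [REQUIRED_KEYS - item.keys() for item in json.loads(response_text)]
```

A single assertion such as assert not any(missing_keys(body)) then covers every key of every element.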

How to select an element from auto-suggestions in a search text box in UFT using Device Replay?

So, I was trying to select the first element from the auto-suggestions displayed after partially entering some characters, using DeviceReplay in UFT. Please help.

Mocking constructors using node and chai/sinon

I have a function like this

function buildToSend(repo) {
  const {
    name, ...data
  } = repo;
  return {
    msg: {
      application: data.name,
      date: new Date(),
    },
  };
}

And I need to test it, but I really can't find out how to mock/stub the new Date() constructor.

Any ideas?

I already tried something like this, but it didn't work:

const date = new Date();
const myStub = sinon.stub(Date.prototype, 'constructor').returns(date);
const input = {
  name: 'name',
};
expect(utils.buildToSend(input)).to.deep.equal({ msg: { name: 'name', date: 'THE DATE' } });

I'm missing something, but I really don't know what. (Of course, Date is not getting stubbed that way.)

Auto-Restart for Spring Boot Tests

I am currently writing unit and integration tests for a Spring Boot application. I'm using Spring Tool Suites 4 for development.

When I run the application using Spring Tool Suites, the auto-restart works fine when I modify and save a file. I'm trying to find a similar way to run my tests.

I currently run the tests using a separate Windows CMD terminal using Maven:

mvn test

This runs once and terminates. Is there any way to have the tests re-run every time a test file is saved?

Adding specific components to the Spring context when creating a facade component

Rather than making some of my classes @Autowire many small @Components, I wanted to create a single @Component that would collect and just forward to the smaller @Components.

Coding itself isn't too difficult, but when it comes to testing it becomes cumbersome, because now I have to add it to @ContextConfiguration(classes). Is there a way to do this so I don't have to manage it individually?

Using @ComponentScan adds the whole package, which I may not want because it will trigger more @MockBean instances to be created.

How to trigger a method with mocked service?

For testing my service, I am mocking some external services (MockServer). The problematic situation comes up when I need to trigger a method based on a mocked response.

I have:

mock.when(request("some/path"))
    .respond(response().withStatusCode(200));

And I have:

   doSomething();

What I need: when the mock returns 200, trigger doSomething() (Java, void). Any input? Thanks!

What is a better way to automate running LTP tests on different Linux distros in the cloud with Python?

The task is to automate running LTP tests asynchronously on different Linux distributions in the cloud with Python, and to gather reports.

Splitting tests based on use cases in a single pom.xml

I have a single pom.xml which includes all integration tests. These tests run in alphabetical order.

Is there a way in Maven to split the test suite into different groups within the same pom.xml and run the tests group-wise?

Example: below is an extract of my current pom.xml

<build>
   <plugins>

      <plugin>
       <runOrder>alphabetical</runOrder>
       <configuration>
          <includes><!-- All the integration tests jumbled up -->
             <include>BackendFeatureA_IT</include>
             <include>FrontEndFeatureA_IT</include>
             <include>BackendFeatureB_IT</include>
             <include>FrontEndFeatureB_IT</include>
          </includes>
       </configuration>
       <executions>
          <execution>
             <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
             </goals>
          </execution>
       </executions>


      </plugin>

   </plugins>
</build>

Our build job takes in the above pom and executes all the tests alphabetically.

I would like organize the above pom into some thing like below:

<build>
       <backend-feature-integration-test>
                  <includes><!-- Include only backend-feature ITs -->
                     <include>BackendFeatureA_IT</include>
                     <include>BackendFeatureB_IT</include>
                  </includes>
      </backend-feature-integration-test>

      <frontend-integration-tests>
                  <includes>
                     <include>FrontEndFeatureA_IT</include>
                     <include>FrontEndFeatureB_IT</include>
                  </includes>
       </frontend-integration-tests>
</build>

How do I subdivide a JSON schema into multiple schemas while disallowing any properties outside those subschemas?

Json Schema Validation Problem

I ran into a problem that I expect is pretty common when building complex schemas. Suppose in this example that we want a Sample Schema that is an object with properties fooA1, fooA2, fooB1 and fooB2, but no other properties. We also want the advantage of being able to separate the subschemas into the files fooA.json and fooB.json. How can we meet both of these requirements?

Main.json

{
  "title": "Sample Schema",
  "description": "Trying to combine two schemas",
  "allOf": [
    {"$ref": "classpath:JsonSchema/Common/fooA.json"},
    {"$ref": "classpath:JsonSchema/Common/fooB.json"}
  ]
}

fooA.json

{
  "type": "object",
  "properties": {
    "fooA1": {
      "type": "integer"
    },
    "fooA2": {
      "type": "integer"
    }
  },
  "required": ["fooA1", "fooA2"]
}

fooB.json

{
  "type": "object",
  "properties": {
    "fooB1": {
      "type": "integer"
    },
    "fooB2": {
      "type": "integer"
    }
  },
  "required": ["fooB1", "fooB2"]
}
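For what it's worth, one way to close the schema (a sketch, assuming a validator that supports draft 2019-09 or later) is unevaluatedProperties, which, unlike additionalProperties, can see the properties matched inside the allOf subschemas:

```json
{
  "title": "Sample Schema",
  "description": "Trying to combine two schemas",
  "allOf": [
    {"$ref": "classpath:JsonSchema/Common/fooA.json"},
    {"$ref": "classpath:JsonSchema/Common/fooB.json"}
  ],
  "unevaluatedProperties": false
}
```

On draft-07 and earlier, additionalProperties cannot look through $ref/allOf, so the usual workaround there is to re-declare the allowed property names in the parent schema.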

Unit Testing with Mockk, java.lang.ClassCastException: PhoneValidationKt$isPhoneValid$1 cannot be cast to kotlin.jvm.functions.Function1

I need some help unit testing my Kotlin function below; I am new to unit testing and my attempts have failed.

My Kotlin top-level function is as follows:

package com.reprator.phone
//PhoneValidation.kt
const val PHONE_LENGTH = 10

fun isPhoneValid(
    phoneNumber: String,
    successBlock: (() -> Unit) = {},
    failBlock: (Int?.() -> Unit) = {}
) = when {
    phoneNumber.isEmpty() ->
        failBlock(R.string.phone_validation_mobile_empty)
    phoneNumber.length < PHONE_LENGTH ->
        failBlock(R.string.phone_validation_mobile)
    else -> successBlock.invoke()
}

My unit test code for the above method is as follows:

@Test
fun `Invalid Phone Number`() {
    mockkStatic("com.reprator.phone.PhoneValidationKt")

    val fn: (Int?) -> Unit = mockk(relaxed = true)

    val result = R.string.phone_validation_mobile
    every {
        isPhoneValid("904186605", failBlock = captureLambda())
    } answers {
        secondArg<(Int?) -> Unit>()(result)
    }

    isPhoneValid("904186605")

    verify { fn.invoke(result) }
}

The following is the error I get when running the test:

java.lang.ClassCastException: com.reprator.phone.PhoneValidationKt$isPhoneValid$1 cannot be cast to kotlin.jvm.functions.Function1

at com.reprator.phone.PhoneValidationKtTest$Invalid Phone Number$2.invoke(PhoneValidationKtTest.kt:31)
at com.reprator.phone.PhoneValidationKtTest$Invalid Phone Number$2.invoke(PhoneValidationKtTest.kt:11)
at io.mockk.MockKStubScope$answers$1.invoke(API.kt:2149)
at io.mockk.MockKStubScope$answers$1.invoke(API.kt:2126)
at io.mockk.FunctionAnswer.answer(Answers.kt:19)
at io.mockk.impl.stub.AnswerAnsweringOpportunity.answer(AnswerAnsweringOpportunity.kt:13)
at io.mockk.impl.stub.MockKStub.answer(MockKStub.kt:54)
at io.mockk.impl.recording.states.AnsweringState.call(AnsweringState.kt:16)
at io.mockk.impl.recording.CommonCallRecorder.call(CommonCallRecorder.kt:53)
at io.mockk.impl.stub.MockKStub.handleInvocation(MockKStub.kt:263)
at io.mockk.impl.instantiation.JvmMockFactoryHelper$mockHandler$1.invocation(JvmMockFactoryHelper.kt:25)
at io.mockk.proxy.jvm.advice.Interceptor.call(Interceptor.kt:20)
at com.reprator.phone.PhoneValidationKt.isPhoneValid(PhoneValidation.kt:15)
at com.reprator.phone.PhoneValidationKt.isPhoneValid$default(PhoneValidation.kt:7)
at com.reprator.phone.PhoneValidationKtTest.Invalid Phone Number(PhoneValidationKtTest.kt:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)

The following are the links I referred to while attempting this:

Link 1 Link 2 Link 3 Link 4

Fitnesse: Could not invoke constructor for CodecastPresentation[0]

I am trying to follow the CleanCoder Applied series by Uncle Bob Martin. He is using Fitnesse testing framework for the project. Github Repo

I am trying to emulate the Fitnesse test.

My CodeCastPresentation class is shown below:

package cleancoderscom.fixtures;

public class CodecastPresentation {
  public boolean loginUser(String username) {
    return false;
  }

  public boolean createLicenseForViewing(String user, String codecast) {
    return false;
  }

  public String presentationUser() {
    return "TILT";
  }

  public boolean clearCodecasts() {
    return false;
  }

  public int countOfCodecastsPresented() {
    return -1;
  }
}

My SetUp FitNesse script is as follows: FitnesseRoot/CleanCoders/SetUp/content.txt

|import|
|cleancoderscom.fixtures|

|library|
|codecast presentation|

My class script is as follows: FitnesseRoot/CleanCoders/content.txt

!define TEST_SYSTEM {slim}
!path out/production/cleancoderscom
!contents

FitnesseRoot folder structure:

FitnesseRoot\CleanCoders
|   content.txt
|   properties.xml
|
+---EpisodeOnePresentCodeCasts
|   |   content.txt
|   |   properties.xml
|   |
|   +---PresentCodecasts
|   |       content.txt
|   |       properties.xml
|   |
|   +---PresentNoCodeCasts
|   |       content.txt
|   |       properties.xml
|   |
|   \---ScenarioLibrary
|           content.txt
|           properties.xml
|
\---SetUp
        content.txt
        properties.xml

Error:

  1. Could not invoke constructor for CodecastPresentation[0] for Setup.library

  2. The instance scriptTableActor.clearCodecasts. does not exist for CodeCastPresentation

Note: 1. Using IntelliJ and Windows 2. My project is built by IntelliJ and my output directory is as follows:

<project root>
└───out
    └───production
        └───cleancoderscom
            └───fixtures

Why does bash not exit on a test syntax error?

I would like a script to exit on a syntax error occurring on a test, but no luck :

bash -n "$0" || exit  # check script syntax

#set -xv
set -o nounset
set -o errexit
set -o pipefail # make pipes fail if one of the piped commands fails

if (( != 0 )); then  # syntax error here
  echo "after if"
fi

echo "should not reach this point, but indeed does"

output is :

./testscript: line 8: ((: != 0 : syntax error: operand expected (error token is "!= 0 ")
should not reach this point, but indeed does

Any solution? Thanks.
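The short explanation: a syntax error inside (( )) just makes the command return a non-zero status, and commands used as an `if` condition are exempt from errexit, so the script carries on. A runnable sketch of both behaviours — each case runs in a throwaway `bash -c` child so the failure can be observed without killing the current shell:

```shell
# Case 1: the broken (( )) is an `if` condition. errexit is suppressed for
# condition commands, so the script survives and keeps executing.
out=$(bash -c 'set -e; if (( != 0 )); then echo yes; fi; echo reached' 2>/dev/null)
echo "condition: ${out}"            # condition: reached

# Case 2: the same broken (( )) as a bare command. Its non-zero status is
# seen by errexit, so the script aborts before the final echo.
out2=$(bash -c 'set -e; (( != 0 )); echo reached' 2>/dev/null; true)
echo "bare command: ${out2:-aborted}"  # bare command: aborted
```

So errexit only aborts when the broken (( )) runs as a bare command, never when it is the condition of an if/while/until.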


Test concurrency in Jest

I would like to test some endpoints I have in my REST API in a concurrent way using Jest. The idea behind is to check if some queries in the database clash each other and set a proper isolation level. How can I achieve such a test in Jest? Would Promise.all help in this case?
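Yes, Promise.all is the usual tool: start all the requests first, then await the combined promise. A minimal sketch — the request helper here is a stand-in; in a real Jest test it would be something like supertest's request(app).post(...):

```javascript
// Stand-in for a real HTTP call (e.g. supertest against your REST API):
async function fakeRequest(i) {
  return { status: 200, id: i };
}

// Fire n requests at once; because the promises are created before any
// of them is awaited, the calls overlap and can expose DB clashes.
async function runConcurrently(n) {
  const responses = await Promise.all(
    [...Array(n).keys()].map((i) => fakeRequest(i))
  );
  return responses.map((r) => r.status);
}

runConcurrently(5).then((statuses) => console.log(statuses));
```

The key point for provoking clashes is to create all the promises before awaiting; awaiting each request in turn would serialize them and hide isolation problems.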

Spring Controller Test using actual data during bean validation

I have a controller method for creating a user like below.

@Secured({RoleNames.ADMIN})
@PostMapping(value = "/users/create")
public ResponseEntity<UserDto> createUser(@Valid @RequestBody CreateUserCommand command) {
    UserDto userDto = userService.create(command);
    return ResponseEntity.ok().body(userDto);
}

I need to validate CreateUserCommand (which holds form data coming from the front-end), for example whether the email already exists in the database.

I created a validator like below:

// The generic parameters were missing; the constraint annotation type is
// assumed here to be called UserEmailExists.
public class UserEmailExistsValidator implements ConstraintValidator<UserEmailExists, CreateUserCommand> {

    @Autowired
    private UserService userService;

    public UserEmailExistsValidator(UserService userService) {
        this.userService = userService;
    }

    @Override
    public boolean isValid(CreateUserCommand command, ConstraintValidatorContext constraintValidatorContext) {
        UserDto userDto = userService.getByEmail(command.getEmail());
        return userDto == null;
    }
}

My problem is that when I run the controller test for the "/users/create" endpoint, the validator checks the actual database (not a test database) for whether the email exists. So if that email exists in the database, validation fails and therefore the tests fail.

I am using Spring Boot 2.1.8 and Mockito 3.0.0.

Selenium - how to get text when element contains text + element

I have elements like this:

<div id="x">
"abc"
<strong>xyz</strong>
"def"
</div>

I am trying:

getDriver().findElement(By.id("x")).getText()

Result is empty string.
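getText() returns only the *visible* text, so an empty string usually means the element is hidden; a common workaround is element.getAttribute("textContent"), or pulling only the direct text nodes through JavascriptExecutor. The function below is the kind of script one could pass to executeScript; here it runs against a hand-rolled stand-in for the DOM node so the sketch works outside a browser:

```javascript
// Collect only the direct text-node children of an element, skipping
// child elements such as <strong>. nodeType 3 === TEXT_NODE.
function directText(el) {
  return Array.from(el.childNodes)
    .filter((n) => n.nodeType === 3)
    .map((n) => n.textContent.trim())
    .filter(Boolean)
    .join(' ');
}

// Hand-rolled stand-in for the <div id="x"> from the question:
const fakeDiv = {
  childNodes: [
    { nodeType: 3, textContent: '\n abc \n' },
    { nodeType: 1, textContent: 'xyz' }, // the <strong> element
    { nodeType: 3, textContent: ' def ' },
  ],
};

console.log(directText(fakeDiv)); // abc def
```

In Selenium the same function body would be passed as a string to ((JavascriptExecutor) driver).executeScript(..., element).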

Flutter app testing with docker and Postgres

I'm new to creating an app, and I want to know how to test my backend API in Android Studio. I have installed Docker and Postgres. How do I connect it to Android Studio and view it on the simulator?

Test case written for url resolver using resolve() in django application not working

I am trying to write test cases for simple user login and registration system in django. First, I was thinking of writing test cases for the urls. The only test case I have written so far is

from django.test import SimpleTestCase
from django.urls import reverse, resolve, path
from main.views import homepage, register, login_request, logout_request
import json

# Create your tests here.

class TestUrls(SimpleTestCase):

      def test_list_is_resolved(self):
          url = reverse('homepage')
          self.assertEquals(resolve(url).func,homepage)

The default urls.py is

 from django.contrib import admin
 from django.urls import path, include

 urlpatterns = [
      path('tinymce/',include('tinymce.urls')),
      path("",include('main.urls')),
      path('admin/', admin.site.urls),
      ]

The main application urls.py is

 from django.urls import path
 from . import views

 app_name='main' # here for namespacing the urls

 urlpatterns = [
      path("", views.login_request, name="login"),
      path("homepage/",views.homepage, name="homepage"),
      path("register/", views.register,name="register"),
      path("logout", views.logout_request, name="logout"),
        ]

Now every time I am running the tests, I am getting the following error.

(myproject) C:\Users\rohan\mysite>py manage.py test
System check identified no issues (0 silenced).
E
======================================================================
ERROR: test_list_is_resolved (main.tests.test_urls.TestUrls)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Users\rohan\mysite\main\tests\test_urls.py", line 11, in test_list_is_resolved
    url = reverse('homepage')
  File "C:\Users\rohan\Envs\myproject\lib\site-packages\django\urls\base.py", line 87, in reverse
   return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
  File "C:\Users\rohan\Envs\myproject\lib\site-packages\django\urls\resolvers.py", line 677, in 
_reverse_with_prefix
  raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'homepage' not found. 'homepage' is not a valid 
view 
function or pattern name.

----------------------------------------------------------------------
 Ran 1 test in 0.015s

FAILED (errors=1)

I am not able to find the error. What is wrong here?

Spock Spy/Mock not registering the invocations

I have a method in my test class that just calls two other methods. I am trying to write a test that checks that those two methods are actually invoked, but no invocations are registered. The Java code I'm testing:

    public void populateEdgeInfo(Map<Actor, SchedulableNode> knownNodes) {
        populateDestinationInfo(knownNodes);
        populateSourceInfo(knownNodes);
    }

My test code:

def "Populating edge info means both source and destination information will be populated" () {
    given:
    actor.getDstChannels() >> []
    actor.getSrcChannels() >> []
    SchedulableNode schedulable = Spy(SchedulableNode, constructorArgs: [actor])

    when:
    schedulable.populateEdgeInfo([:])

    then:
    1 * schedulable.populateDestinationInfo(_)
    1 * schedulable.populateSourceInfo(_)
}

The only thing registered is the call to populateEdgeInfo. Is there something obvious that I am doing wrong? I also tried using Mock instead of Spy, to no avail.

dimanche 29 mars 2020

Laravel 5.6 testing Notification::assertSentTo() not found

I have been struggling for several days to get the Notification::assertSentTo() method working in my feature test of reset-password emails in a Laravel 5.6 app, yet I keep receiving failures with the following code:

namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Auth\Notifications\ResetPassword;
use Illuminate\Support\Facades\Notification;
use Illuminate\Foundation\Testing\WithFaker;
use Illuminate\Foundation\Testing\RefreshDatabase;

class UserPasswordResetTest extends TestCase
{
   public function test_submit_password_reset_request()
   {
      $user = factory("App\User")->create();

      $this->followingRedirects()
         ->from(route('password.request'))
         ->post(route('password.email'), [ "email" => $user->email ]);

      Notification::assertSentTo($user, ResetPassword::class);
   }

}

I have tried several ideas, including using Illuminate\Support\Testing\Fakes\NotificationFake directly in the use list. Every attempt keeps failing with:

Error: Call to undefined method Illuminate\Notifications\Channels\MailChannel::assertSentTo()

Looking forward to any hints helping towards a successful test. Regards & take care!

Micronaut mock repository interface with Replace annotation

I have a repository interface of type CrudRepository. I'm trying to mock this interface with an abstract class that extends it.

I noticed that if I invoke findAll method it works as expected, but when I invoke the findById method I get an error like this:

Micronaut Data method is missing compilation time query information. Ensure that the Micronaut Data annotation processors are declared in your build and try again with a clean re-build.
java.lang.IllegalStateException: Micronaut Data method is missing compilation time query information. Ensure that the Micronaut Data annotation processors are declared in your build and try again with a clean re-build.
    at io.micronaut.data.intercept.DataIntroductionAdvice.intercept(DataIntroductionAdvice.java:97)

This is the interface to be mocked:

@Repository
interface RepositoryHibernate: CrudRepository<Entity, Long>

This is the mock class:

@Replaces(RepositoryHibernate::class)
abstract class RepositoryCrudMock: RepositoryHibernate {

    val elements = mutableListOf(test1, test2, test3, test4)

    override fun findAll(): MutableIterable<Entity> {
        return elements
    }

    override fun findById(id: Long): Optional<Entity> {
        return when(id) {
            1L -> Optional.of(test1)
            2L -> Optional.of(test2)
            3L -> Optional.of(test3)
            4L -> Optional.of(test4)
            else -> Optional.of(Entity())
        }
    }
}

Linking online network server between software and application without web

I'm trying to make my network server online and link it to a system and an application that I designed. However I don't want to make a website to upload the server to. Is this possible? Can I link my server directly to my software and my application via the internet? Thanks.

Testing with a metaoperator doesn't print the test description

I was writing tests on Complex arrays and I was using the Z≅ operator to check whether the arrays were approximately equal, when I noticed a missing test description.
I tried to golf the piece of code to find out the simplest case that shows the result I was seeing. The description is missing in the second test even when I use Num or Int variables and the Z== operator.

use Test;

my @a = 1e0, 3e0;
my @b = 1e0, 3e0;
ok @a[0] == @b[0], 'description1';     # prints: ok 1 - description1
ok @a[^2] Z== @b[^2], 'description2';  # prints: ok 2 -

done-testing;

Is there a simple explanation or is this a bug?

How to create a mock model in Laravel testing

I need to test a function in UserController:

public function CreateUser(Request $request): Response
{    
    $user = User::firstOrCreate(['device_id' => $request->device_id]);
    $token = Auth::login($user);
    return Response(['status'=> 'user created successfully'],200);
} 

and I created a test function like the following:

public function testLoginGuest()
{
    $mockUser = Mockery::mock(new App\User());
    $this->app->instance(App\User::class, $mockUser);
    $this->post(route('user_create'), ['device_id' => 'REC00ER']);
    ...
}

But this function creates a real row in the database. How can I mock the database for this request?

Fake an Observer for Testing?

I'm building a Test to test that a monthly subscription can be made through an api endpoint. I have a subscription observer that assigns the user a role based on their subscription when a new subscription is created, and this is giving me some trouble with testing.

I have a SubscriptionObserver which has the following:

/**
 * Handle the subscription "created" event.
 *
 * @param  \App\Subscription  $subscription
 * @return void
 */
public function created(Subscription $subscription)
{
    ($subscription->stripe_plan == 'monthly') ? auth()->user()->assignRole('basic-user') : auth()->user()->assignRole('premium-user');
}

My test is:

/** @test */
public function it_can_create_a_monthly_subscription()
{

    $data = [
      'plan' => 'monthly',
      'payment' => 'pm_card_visa',
    ];

    $response = $this->actingAs($this->unsubscribedUser, 'api')->post('api/subscriptions', $data);


    $response
    ->assertSuccessful()
    ->assertJsonStructure([
      "subscription_created",
      "subscription" => [
          "name",
          "stripe_id",
          "stripe_status",
          "stripe_plan",
          "quantity"
      ]
    ]);

}

The error I'm getting is:

1) Tests\Feature\SubscriptionTest::it_can_create_a_monthly_subscription
Error: Call to a member function assignRole() on null

Any suggestions on how I can resolve this?

jest testing discord bot commands

So I have a file that uses module.exports and has 4 fields, among which an execute field that takes 2 args and is essentially a function. It doesn't return anything; instead it uses discord.js and runs message.channel.send('Pong');. I want to test this using Jest. How do I:

1. Make sure that message.channel.send was called with 'Pong' as its argument?
2. Mock it so it doesn't actually get called? (I just want to verify that the text passed in is 'Pong', since actually calling it won't work due to the lack of a proper message object.)

I can access the actual command and execute it but I am unsure as to how to check the contents of message.channel.send. The message object cannot be reconstructed by me so that might also need mocking.

I'm using discord.js but that shouldn't really matter.

I will also have to test commands that feature functions that do have returns so how should I go about them?
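Both points come down to injecting a fake message object whose channel.send is a recording stub, so nothing real is ever called. A sketch — the command object here is hypothetical, standing in for the module you would require; with Jest you would use jest.fn() for the stub and expect(fakeMessage.channel.send).toHaveBeenCalledWith('Pong'):

```javascript
// Hand-rolled stub: channel.send records every argument it receives.
const calls = [];
const fakeMessage = { channel: { send: (text) => calls.push(text) } };

// Hypothetical command module, standing in for the real file under test:
const pingCommand = {
  name: 'ping',
  execute(message, args) {
    message.channel.send('Pong');
  },
};

pingCommand.execute(fakeMessage, []);
console.log(calls); // [ 'Pong' ]
```

For commands whose execute returns a value, the same fake message works; you just additionally assert on the return value.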

Can I send a real file using MockMultipartFile?

I have simple controller.

@PostMapping()
public Integer uploadFile(MultipartFile file) throws IOException {
    return service.readFileFromExcel(file);
}

And I want to write an integration test for it. I've read about MockMultipartFile and seen some examples, but they are too simple, like:

MockMultipartFile file = new MockMultipartFile("file", "hello.txt", MediaType.TEXT_PLAIN_VALUE, "Hello, World!".getBytes());

But is there a way to send a real file from the resources directory instead of "Hello, World!".getBytes()?
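Yes — read the file's bytes and pass them as the last constructor argument. A sketch: a temp file stands in for src/test/resources/hello.txt so the snippet is self-contained, and the MockMultipartFile line itself is shown as a comment because it needs spring-test on the classpath:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class RealFileUpload {
    // Reads the raw bytes of a file; in a real test, point this at
    // src/test/resources/hello.txt (or use getClass().getResourceAsStream).
    public static byte[] readTestResource(Path path) throws Exception {
        return Files.readAllBytes(path);
    }

    public static void main(String[] args) throws Exception {
        Path resource = Files.createTempFile("hello", ".txt"); // stand-in resource
        Files.write(resource, "real file content".getBytes());

        byte[] content = readTestResource(resource);
        // In the integration test (requires spring-test):
        // MockMultipartFile file =
        //     new MockMultipartFile("file", "hello.txt", MediaType.TEXT_PLAIN_VALUE, content);
        System.out.println(content.length > 0);
    }
}
```

The four-argument MockMultipartFile constructor accepts any byte[], so the content of a real file works exactly like the "Hello, World!" bytes in the simple examples.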

Questioning about Contra-variance in TDD

I am currently learning TDD and I was wondering about the real meaning of Uncle Bob's sentence about the refactoring step in connection with TDD. The subject is test contra-variance in TDD and it comes from his Clean Coder blog.

Context: Suppose I begin writing a new class. Call it X. I first write a new test class named XTest. As I add more and more unit tests to XTest, I add more and more code to X, and I refactor that code by extracting private methods from the original functions that are called by XTest.

Then I have to refactor the tests too. (This is where my misunderstanding lies.)

About this step, Uncle Bob said:

I look at the coupling between XTest and X and I work to minimize it. I might do this by adding constructor arguments to X or raising the abstraction level of the arguments I pass into X. I may even impose a polymorphic interface between XTest and X.

My questions are: How do I identify coupling? What does he mean by "adding constructor arguments to X or raising the abstraction level of the arguments I pass into X" and "polymorphic interface between XTest and X"?

Sample code would be very welcome! :)

Link to the blog article in question : https://blog.cleancoder.com/uncle-bob/2017/10/03/TestContravariance.html

Thanks in advance.

Can we open another Chrome browser through Selenium WebDriver in a single test?

Can we open another Chrome browser through Selenium WebDriver in a single test? If yes, could you please provide a solution?

What are the benefits of using Cypress.io from inside an Angular application vs from outside?

To make it more clear, here is what I exactly mean:

I set up an Angular project, a simple login/register app. Then I installed Cypress inside it; here is what the structure looks like:


I run npx cypress run and all the tests run as expected.

Now, for experimenting purposes I installed a cypress on a stand-alone folder, and ran the same tests from there:


I run npx cypress run from the stand-alone new Cypress folder and the tests also behave as expected.

My question is: what is the difference between the two setups? Are there any benefits to using Cypress from inside an Angular project vs. from a stand-alone folder?

Spring JUnit error with the JPA and Thymeleaf starter dependencies

I have a problem with my project when I use JUnit to run the tests. If I remove the Thymeleaf and JPA dependencies it works, but I need them, and because of this I can't export my project as a .jar. I don't understand why it can't find contextLoads. Thank you in advance :)


Python import error "module 'factory' has no attribute 'fuzzy'"

I'm new to factory_boy. In my code I import factory and then use this import to access fuzzy, as factory.fuzzy; this shows the error module 'factory' has no attribute 'fuzzy'.

I solved this problem by importing again, like this:

import factory
from factory import fuzzy

by doing so there were no errors.

What is the reason for this?
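The reason is how Python treats package submodules: import factory does not load factory.fuzzy; the attribute only appears on the package after something imports the submodule — which is exactly what from factory import fuzzy does as a side effect. A sketch using a stdlib package to show the same mechanics:

```python
# json.tool is a submodule of the json package, like factory.fuzzy is of
# factory. Importing the package alone does not bind the submodule name.
import json

print(hasattr(json, "tool"))   # typically False: submodule not loaded yet

import json.tool               # explicitly load the submodule

print(hasattr(json, "tool"))   # True: the import bound json.tool
```

Some packages deliberately import their submodules in __init__.py, which is why the behaviour seems inconsistent between libraries; factory apparently does not do so for fuzzy.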

How to set PYTHONHASHSEED environment variable in PyCharm for testing Word2Vec model?

I need to write a fully reproducible Word2Vec test, and need to set PYTHONHASHSEED to a fixed value. This is my current set-up:

# conftest.py
@pytest.fixture(autouse=True)
def env_setup(monkeypatch):
    monkeypatch.setenv("PYTHONHASHSEED", "123")

# test_w2v.py

def test_w2v():
    assert os.getenv("PYTHONHASHSEED") == "123"
    expected_words_embeddings = np.array(...)
    w2v = Word2Vec(my_tokenized_sentences, workers=1, seed=42, hashfxn=hash)
    words_embeddings = np.array([w2v.wv.get_vector(word) for sentence in my_tokenized_sentences for word in sentence])
    np.testing.assert_array_equal(expected_words_embeddings, words_embeddings)

Here is the curious thing.

If I run the test from the terminal by doing PYTHONHASHSEED=123 python3 -m pytest test_w2v.py the test passes without any issues. However, if I run the test from PyCharm (using pytest, set up from Edit Configurations -> Templates -> Python tests -> pytest) then it fails. Most interestingly, it doesn't fail at assert os.getenv("PYTHONHASHSEED") == "123", but it fails at np.testing.assert_array_equal(expected_words_embeddings, words_embeddings)

Why could this be the case, and is there a way to fix this issue?
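A plausible explanation: CPython reads PYTHONHASHSEED once, at interpreter start-up. monkeypatch.setenv therefore changes os.environ (so the assert on os.getenv passes) but not the hash randomisation already in effect, and Word2Vec's hashfxn=hash still varies between runs. Launching from the terminal sets the variable before Python starts, which is why that works. A sketch of the start-up behaviour:

```python
# Setting PYTHONHASHSEED inside a running interpreter has no effect on
# hash(); only a freshly started interpreter honours it. Two child
# interpreters launched with the same seed produce the same hash.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONHASHSEED="123")
code = 'print(hash("gensim"))'

run1 = subprocess.run([sys.executable, "-c", code], env=env,
                      capture_output=True, text=True).stdout.strip()
run2 = subprocess.run([sys.executable, "-c", code], env=env,
                      capture_output=True, text=True).stdout.strip()

print(run1 == run2)  # True: reproducible across interpreter runs
```

If this is the cause, the PyCharm-side fix is to put PYTHONHASHSEED=123 into the run configuration's environment variables (Edit Configurations), so it is set before the interpreter starts, rather than in a fixture.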

How can I resolve conflicting actor systems while testing akka-http and akka actors at the same spec file?

I have a Route defined using akka-http that uses an actor inside to send messages. My route looks like this:

      path("entity") {
        post {
          entity(as[Entity]) {
            entity =>
              val result: Future[Message] = mainActor.ask {
                ref: akka.actor.typed.ActorRef[Message] =>
                  Message(
                    entity = entity,
                    replyRef = ref
                  )
              }
              complete("OK")
          }
        }
      }

My test spec:

class APITest
    extends ScalaTestWithActorTestKit(ManualTime.config)
    with ScalatestRouteTest
    with AnyWordSpecLike {
      val manualTime: ManualTime = ManualTime()
     // my tests here ...
}

Compiling the test fails since there are conflicting actor systems:

class APITest inherits conflicting members:
[error]   implicit def system: akka.actor.typed.ActorSystem[Nothing] (defined in class ActorTestKitBase) and
[error]   implicit val system: akka.actor.ActorSystem (defined in trait RouteTest)

Overriding the actor system doesn't help either since the inherited actor systems are of both typed and untyped ones. How can I resolve this easily?

samedi 28 mars 2020

Mocking multiple API calls in my class and writing a test

So, I am trying to mock some APIs in my class; the code looks something like this.

import requests
class myclass:
  def A(self, data):
    response = requests.get("some_url", params)
    if response.data["has_value"]:
      new_response = requests.get("some_url", params)
      **do some validation on data recieved**
  def B(self, data):
    response = requests.get("some_url", params)
    **do some validation on data recieved**
  def _run(self):
    **some code**
    self.A(data)
    self.B(data)

m = myclass()
m.run()

I am trying to write tests for these and need some help. While doing validations we change some fields in the data, and I have to verify that the data is correct. How can this be done? Thank you.
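The standard approach is to stub requests.get with unittest.mock so the methods never touch the network, and then assert both on the return value and on what was requested. A sketch — the class below only mirrors the one in the question, and the URL and payload are made up (a dummy requests module is registered so the sketch runs even where the real library is not installed):

```python
import sys
import types
from unittest import mock

# Register a placeholder so `import requests` works without the library.
sys.modules.setdefault("requests", types.ModuleType("requests"))
import requests

class MyClass:
    def a(self):
        # stands in for the question's validation logic on fetched data
        response = requests.get("https://example.test/data")
        return response["has_value"]

def test_a_uses_fetched_data():
    fake_payload = {"has_value": True}
    with mock.patch("requests.get", return_value=fake_payload, create=True) as fake_get:
        assert MyClass().a() is True                        # validated result
        fake_get.assert_called_once_with("https://example.test/data")  # right URL

test_a_uses_fetched_data()
print("ok")
```

For the method that calls requests.get twice, give the mock a side_effect list of two payloads; then assert on the fields your validation is supposed to change.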

Does Cypress support cross-browser testing like Selenium, or are there limitations?

I am wondering whether Cypress supports many browser types and versions, to implement cross-browser testing. As mentioned in their documentation [1], they support Chrome-family browsers (including Electron) and have beta support for Firefox.

But do they support Internet Explorer, Safari and other versions of Chrome and Firefox? If not, is there an alternative way to implement such a facility (an external plugin or something)? I have tried the Applitools Ultrafast Grid [2], but their configuration (APPLITOOLS_API_KEY) is documented only for Windows and macOS. I'm implementing the project on Ubuntu.

Also, the cross-browser tests should run in headless mode.

[1] https://docs.cypress.io/guides/guides/cross-browser-testing.html#Continuous-Integration-Strategies

[2] https://applitools.com/blog/cypress-cross-browser-testing?utm_referrer=https://www.google.com/

Easier way of testing a custom collection that works exactly the same as official legacy ones

Question

  • What is the easiest way of testing a custom collection that works exactly the same as a corresponding legacy one, such as java.util.LinkedList?

More detail

  • Every single method and other utility matches the corresponding legacy one fully.
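Since the legacy class is the specification, a low-effort approach is oracle (model-based) testing: replay the same random operations on your collection and on java.util.LinkedList, asserting the states stay equal at every step. A sketch — the custom class is hypothetical, so ArrayList stands in for it here to keep the snippet runnable:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

public class OracleCheck {
    // Applies `steps` random add/remove operations to both lists and
    // compares the full state after each one. A fixed seed keeps any
    // failure reproducible.
    static void compareAgainstOracle(List<Integer> custom, long seed, int steps) {
        List<Integer> oracle = new LinkedList<>();
        Random rnd = new Random(seed);
        for (int i = 0; i < steps; i++) {
            if (rnd.nextInt(2) == 0) {           // add the same value to both
                int v = rnd.nextInt(100);
                custom.add(v);
                oracle.add(v);
            } else if (!oracle.isEmpty()) {      // remove the same index from both
                int idx = rnd.nextInt(oracle.size());
                custom.remove(idx);
                oracle.remove(idx);
            }
            if (!custom.equals(oracle)) {
                throw new AssertionError("diverged from LinkedList at step " + i);
            }
        }
    }

    public static void main(String[] args) {
        compareAgainstOracle(new ArrayList<>(), 42L, 1_000);
        System.out.println("matches LinkedList for 1000 random operations");
    }
}
```

Extending the operation set (add at index, iterator behaviour, contains, etc.) grows coverage without hand-writing a test per method; List.equals is implementation-independent, so comparing across implementations is valid.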

How do I run unit tests on two different file formats?

I need to test a system that works identically with YAML and JSON file formats. I wrote up a bunch of unit tests for the database backend but I want to run them on both formats. All I need to change is the path provided for the tests. I'm using Java 8 and org.junit.jupiter.

import static org.junit.jupiter.api.Assertions.*;

public class DatabaseTests {

    //Need to re-test for "src\\test\\java\\backend\\database\\testDB.yaml"
    private final static String TEST_DB_JSON = "src\\test\\java\\backend\\database\\testDB.json";

    private static byte[] testFileState;

    @BeforeAll
    static void setUp() {
        try {
            testFileState = Files.readAllBytes(Paths.get(TEST_DB_JSON));
            reloadDatabase();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @AfterEach
    void resetFile() {
        try (FileOutputStream fos = new FileOutputStream(TEST_DB_JSON)) {
            fos.write(testFileState);
        } catch (IOException e) {
            e.printStackTrace();
        }
        reloadDatabase();
    }

    //A bunch of unit tests
}

I don't want to copy and paste the whole class and change just one variable, but I can't figure out how to do this by making the class abstract or something similar. The tests work identically on both files (as does my database code) and both files contain exactly the same test data.
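The usual JUnit 5 answer is a @ParameterizedTest with @ValueSource(strings = {jsonPath, yamlPath}), or an abstract base test class with two tiny subclasses supplying the path. The plain-Java sketch below shows the same parameterize-over-the-path idea without the JUnit dependency, so it runs standalone; temp files stand in for the two test databases:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class DualFormatRunner {
    // One run of the shared checks against a single database file.
    static void runChecksAgainst(Path db) throws Exception {
        byte[] snapshot = Files.readAllBytes(db);  // like the @BeforeAll snapshot
        // ... exercise the database backend here ...
        Files.write(db, snapshot);                 // like @AfterEach: restore the file
    }

    public static void main(String[] args) throws Exception {
        // The same checks run once per format, only the path differs.
        for (String suffix : List.of(".json", ".yaml")) {
            Path db = Files.createTempFile("testDB", suffix); // stand-in test DB
            Files.write(db, "{}".getBytes());
            runChecksAgainst(db);
            System.out.println(suffix + " checks passed");
        }
    }
}
```

With junit-jupiter-params on the classpath, the equivalent is to annotate each test method with @ParameterizedTest and @ValueSource containing the two paths, and to move the snapshot/restore logic into helpers keyed by that parameter.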

chai js oneOf / justify message not clear

I want to test against multiple correct answers. I tried both oneOf and satisfy, but the error message comes out like 'expected X to be one of [ Array(2) ]' or 'expected X to satisfy function () => {}'.

I want the message to include the real values, not 'Array' or 'function'.

How can I do that?

Is there an option to include a custom message with all these values, or is there another solution?

thanks!

How to run automate cljs tests in cider?

When I run tests in my core_test.clj file in cider, the tests run with no problem whenever I load a buffer, since I have set (cider-auto-test-mode 1). But this doesn't work with the cljs file. I have the following code in core_test.cljs

(ns myapp.core-test
  (:require
   [cljs.test :refer-macros [is deftest testing]]))

(deftest my-test
  (testing "Arithmetic"
    (testing "with positive integers"
      (is (= 4 (+ 2 2)))
      (is (= 7 (+ 3 4))))
    (testing "with negative integers"
      (is (= -4 (+ -2 -2)))
      (is (= -1 (+ 3 -4))))))

And upon doing C-k in core.cljs, the test doesn't run. Why is this, and how can I make cljs tests automatic too, just like the clj tests?

Test word against array for anagrams - Javascript

So far.. I have this:

function anagrams(word, words) {
  for (let i = 0; i <= words.length; i++) {
    const aCharMap = buildCharMap(word);
    const bCharMap = buildCharMap(words[i]);

    if (Object.keys(aCharMap).length !== Object.keys(bCharMap).length) {
      words.pop(words[i]);
    }
    for (let char in aCharMap) {
      if (aCharMap[char] !== bCharMap[char]) {
        words.pop(words[i]);
      }
    }
    console.log(word);
    console.log(words);
  }
}

function buildCharMap(str) {
  const charMap = {};
  for (let char of str.replace(/[^\w]/g, '').toLowerCase()) {
    charMap[char] = charMap[char] + 1 || 1;
  }
  return charMap;
}

The question at hand is obvious if you read through the code, but here it is:

Write a function that will find all the anagrams of a word from a list. You will be given two inputs a word and an array with words. You should return an array of all the anagrams or an empty array if there are none. For example:

anagrams('abba', ['aabb', 'abcd', 'bbaa', 'dada']) => ['aabb', 'bbaa']

anagrams('racer', ['crazer', 'carer', 'racar', 'caers', 'racer']) => ['carer', 'racer']

anagrams('laser', ['lazing', 'lazy', 'lacer']) => []
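For reference, a minimal working sketch of the exercise (it builds a fresh result array rather than mutating `words` while iterating over it, which is what trips up the code above): normalise each word to its sorted lowercase characters and keep the ones matching the target.

```javascript
// Sketch: two words are anagrams iff their sorted character sequences match.
function anagrams(word, words) {
  const normalize = (s) => s.toLowerCase().split('').sort().join('');
  const target = normalize(word);
  return words.filter((w) => normalize(w) === target);
}

console.log(anagrams('abba', ['aabb', 'abcd', 'bbaa', 'dada'])); // [ 'aabb', 'bbaa' ]
console.log(anagrams('laser', ['lazing', 'lazy', 'lacer']));     // []
```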

Using db VIEW in tests

In my spring-boot web app i have data model described follow next sql code (two entities and view)

create table ref_street (
  id bigint not null default nextval('ref_street_seq') primary key,
  name character varying(255)
);

create table op_client (
    id bigint not null default nextval('op_client_seq') primary key,
    fio character varying(255),
    street_id bigint not null
    ...
);

alter table op_client add constraint fk_op_client_ref_street foreign key(street_id) references ref_street(id);

create or replace view v_view_client as
select
    oc.id id,
    oc.fio fio,
    rs.id street_id,
    rs."name" street_name
from
    op_client oc
left join ref_street rs on oc.street_id = rs.id;

In my application domain model I have an OpClient entity with a Spring Data JPA repository that looks like this:

@Data
@EqualsAndHashCode(callSuper = true)
@Entity
@Table(name = "op_client")
@SequenceGenerator(name = "default_gen", sequenceName = "op_client_seq", allocationSize = 1)
public class OpClient extends AbstractEntity<Long> {
  @Column(name = "user_id")
  private Long userId;

  @Column(name = "fio")
  private String fio;

  @Column(name = "street_id")
  private Long streetId;

...

  @CreatedDate
  @Column(name = "create_dt")
  private LocalDate createDate;
}

@Repository
public interface OpClientRepository extends JpaRepository<OpClient, Long> {

}

And the ViewClient entity and repository look like this:


@Entity
@Data
@EqualsAndHashCode(callSuper = true)
@Table(name = "v_view_client")
public class ViewClient extends AbstractEntityWithManualId<Long> {

  @Column(name = "fio")
  private String fio;

  @Column(name = "street_id")
  private Long streetId;

  @Column(name = "street_name")
  private String streetName;
}

@Repository
public interface ViewClientRepository extends JpaRepository<ViewClient, Long> {

}

In my test case I am using the embedded PostgreSQL engine com.opentable.components:otj-pg-embedded.

The question is: why, if I am using OpClientRepository to successfully insert data into the op_client table, does my viewClientRepository return an empty list? My test case code looks like this:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {DomofonContextMemoryDb.class})
@ActiveProfiles("test")
public class ClientServiceTest {

  @Autowired
  private ClientService clientService;

  @Autowired
  private ViewClientRepository viewClientRepository;

  @Test
  public void addClientTest() {
    OpClient client = new OpClient();
    client.setFio("Test Test Test");
    client.setStreetId(1L);
    clientService.addClient(client);

    assertThat(viewClientRepository.findAll()).hasSizeGreaterThan(0);
  }

Instead, an assertion error is thrown.

Spock parameterised test show parameter values in failure results

I'm just trying out parameterised tests for more or less the first time.

I'm surprised that when a test fails you can't actually see the values of the parameters which caused the failure.

Is there some setting which enables this?

Can't apply migrations in Django SQLite in-memory database

I'm writing some tests, and I want to be able to run the Django server with my test settings (that's why I'm using an in-memory database).

It seems to be working, no errors reported when running. But migrations are not applied - I can't perform any action on the database, because model tables do not exist.

When I run python manage.py migrate, all my migrations get applied (I see these Applying migrations... OK messages), but it has no effect. When I run python manage.py showmigrations, none of the migrations are applied (I see [ ] 0001_initial etc., without the X).

When I go to the Django shell, I can't perform any action, because the table does not exist. Any idea what might be the reason? It works fine with a normal PostgreSQL database.

My settings:

DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
        'TEST_NAME': ':memory:',
    },
}

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': ''
    }
}
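For what it's worth, one likely cause to rule out (this is an assumption, not a confirmed diagnosis): every new connection to an SQLite ':memory:' database gets its own empty database, so the connection that applied the migrations and the connections serving the shell or the dev server never share tables. SQLite's shared-cache URI form is one way around that:

```python
# Hypothetical settings sketch: a named in-memory database opened in
# shared-cache mode, so every connection in the process sees the same tables.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'file:memdb1?mode=memory&cache=shared',
        'OPTIONS': {'uri': True},  # have sqlite3 treat NAME as a URI
    },
}
```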

Zed A.Shaw ex48

I am currently going through Zed A. Shaw's book Learn Ruby the Hard Way and I am having trouble understanding exercise 48. What I don't understand is this piece of test code:

class LexiconTests < Test::Unit::TestCase

  Pair = Lexicon::Pair
  @@lexicon = Lexicon.new()

  def test_directions()
    assert_equal([Pair.new(:direction, 'north')], @@lexicon.scan("north"))
    result = @@lexicon.scan("north south east")
    assert_equal(result, [Pair.new(:direction, 'north'),
                          Pair.new(:direction, 'south'),
                          Pair.new(:direction, 'east')])
  end
end

Why do we need to use Pair = Lexicon::Pair? What does this piece of code create?
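What that assignment creates can be seen in a tiny standalone sketch (the Struct below is only a stand-in for the book's Pair class): it doesn't make a new class, it just binds the class object nested inside Lexicon to a second, shorter constant, so the test can write Pair.new(...) instead of Lexicon::Pair.new(...).

```ruby
# Sketch: `Pair = Lexicon::Pair` is a constant alias, not a copy.
module Lexicon
  Pair = Struct.new(:token, :word)
end

Pair = Lexicon::Pair   # bind the same class object to a shorter name

pair = Pair.new(:direction, 'north')
puts pair.word                    # north
puts Pair.equal?(Lexicon::Pair)   # true: both constants reference one class
```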

Cow and Bull python program test

Hi guys, hope you are all doing fine. I just wrote a Python program; can you tell me if there is anything wrong with it, or is it okay?


import random

# to ensure the random number has non-repeated digits
num_list = []
while len(num_list) < 4:
    rnd = random.randint(0, 9)
    if rnd not in num_list:
        num_list.append(rnd)
        continue

random_num = int("".join(map(str, num_list)))
random_num_str = str(random_num)

# actual game starts here
guesses = 0
playing = True

def compare_num(guess, random_num_str):
    i = 0
    cow_bull = [0, 0]
    for i in range(len(guess)):
        if random_num_str[i] == guess[i]:
            cow_bull[0] += 1
        else:
            for j in range(len(random_num_str)):
                if guess[i] == random_num_str[j]:
                    cow_bull[1] += 1
    return cow_bull

print("Let's play a game of Cowbull!")  # explanation
print("I will generate a number, and you have to guess the numbers one digit at a time.")
print("For every number in the wrong place, you get a bull. For every one in the right place, you get a cow.")
print("The game ends when you get 4 cows!")
print("Type exit at any prompt to exit.")

while playing:
    guess = input("Guess your best number: ")
    if guess == "exit":
        print(f"I guessed the number {random_num_str}")
        break
    cowbull = compare_num(guess, random_num_str)
    guesses += 1
    print(f"You got {cowbull[0]} cows and {cowbull[1]} bull")
    if cowbull[0] == 4:
        playing = False
        print("You won the game lad, after " + str(guesses) + " guesses, idiot")
        print(f"I guessed the number {random_num_str}")
        break
    else:
        print("Try again bro!")

How can I share test code between Rust crates in a workspace?

I have several crates in a workspace. One crate defines a Trait that the others implement. I would like to write a few test functions that take the Trait and ensure all the invariants always hold, and that sample code works with all instances of the Trait. So I'd like to define a test suite, and each other crate should say "I define my tests as being this test suite, with my own implementation of the Trait". Is that possible?

I suppose I could define a macro in my library that generates all the tests using the Trait instance, but that would mix production and test code in my library. Can another crate in my workspace reference a test module present in the tests folder of my main crate?

So basically I have:

workspace
|-- crate1
    |-- src
        |-- lib.rs
    |-- tests
        |-- harness.rs
|-- crate2
    |-- src
        |-- lib.rs
    |-- tests
        |-- test2.rs

And I would like test2.rs to be able to use harness.rs. Is that possible?
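To make the macro idea mentioned above concrete (a sketch under the assumption that a test-only macro in the library is acceptable, e.g. behind a feature flag or in a dedicated test-support crate; all names are hypothetical): the crate owning the trait exports a macro that expands to the shared invariant tests, and every implementing crate invokes it with its own type.

```rust
// Sketch: a macro that expands to the shared trait-invariant tests for any
// implementor of the (hypothetical) Counter trait.
pub trait Counter {
    fn new() -> Self;
    fn incr(&mut self);
    fn get(&self) -> u64;
}

#[macro_export]
macro_rules! counter_invariant_tests {
    ($ty:ty) => {
        #[test]
        fn starts_at_zero() {
            let c = <$ty>::new();
            assert_eq!(c.get(), 0);
        }

        #[test]
        fn incr_adds_one() {
            let mut c = <$ty>::new();
            c.incr();
            assert_eq!(c.get(), 1);
        }
    };
}

// A minimal implementor, standing in for a type from another crate;
// that crate's tests would just write `crate1::counter_invariant_tests!(Simple);`.
struct Simple(u64);

impl Counter for Simple {
    fn new() -> Self { Simple(0) }
    fn incr(&mut self) { self.0 += 1 }
    fn get(&self) -> u64 { self.0 }
}

counter_invariant_tests!(Simple);

fn main() {
    let mut c = Simple::new();
    c.incr();
    assert_eq!(c.get(), 1);
}
```

Note that code under `tests/` is compiled as a separate crate per file and is not importable from other crates, which is why the shared harness has to live in library code (or its own crate) rather than in crate1's tests folder.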

Selenium: How do I navigate to a new page and do assertions there?

I'm new to Selenium. I'm currently trying to test a web application.

I want to click a button on the start page, which navigates me to a login page. There I'd like to enter my credentials and click the login button. This leads me to a page where I'd like to assert some things.

The problem is that whenever I've navigated to a new page, the test doesn't work as expected. Example: I get to the login page, but then the input fields aren't filled in. If I go to the login page directly, it works.

I assume this is a test case that many people will have had, so there must be a simple solution to it, but I just can't manage to find one. I'd really appreciate any help a lot.

Here is my test code:

public class AdminTest {
    protected WebDriver driver; 

    @Before
    public void setUp() throws Exception {
        System.setProperty("webdriver.chrome.driver", "/usr/bin/chromedriver");
        ChromeOptions chromeOptions = new ChromeOptions();

        this.driver = new ChromeDriver(chromeOptions);    

    }

    @After
    public void tearDown() throws Exception {
        if (driver != null) {
            driver.quit();
        }
    }

    @Test
    public void viewOrderListTest() {

        // this leads to the start page
        this.driver.get("http://localhost:8080/eStore/");
        this.driver.manage().window().maximize();

        // I click the link to the Login page, so far so good
        this.driver.findElement(By.id("link_to_login")).click();

        this.driver.manage().timeouts().implicitlyWait(3, TimeUnit.SECONDS);

        // the keys are NOT entered
        WebElement username = this.driver.findElement(By.id("username"));

        username.click();
        username.clear();
        username.sendKeys("admin");

        WebElement password = this.driver.findElement(By.id("password"));
        password.click();
        password.clear();
        password.sendKeys("admin");

        // This button gets clicked by the test
        this.driver.findElement(By.id("submit_button")).click();

AndroidStudio Cucumber Feature Path Error

I'm trying to start using Cucumber with Android Studio, and after checking a lot of documentation and related posts I can't get rid of an error that is driving me mad. This is the one:

java.lang.IllegalArgumentException: path must exist: /src/androidTest/java/com/example/tetris/cucumber
at io.cucumber.core.resource.PathScanner.findResourcesForPath(PathScanner.java:42)
at io.cucumber.core.resource.PathScanner.findResourcesFo ...

This is my directory tree:

(screenshot of the directory tree)

This is my build.gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.2"
    defaultConfig {
        applicationId "com.example.tetris"
        minSdkVersion 15
        targetSdkVersion 29
        versionCode 1
        versionName "1.2"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}
configurations {
    cucumberRuntime {
        extendsFrom testImplementation
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    implementation 'com.google.android.material:material:1.1.0'
    testImplementation 'junit:junit:4.13'

    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
    androidTestImplementation group: 'io.cucumber', name: 'cucumber-java', version: '5.2.0'
    androidTestImplementation group: 'io.cucumber', name: 'cucumber-junit', version: '5.5.0'
    androidTestImplementation group: 'io.cucumber', name: 'cucumber-android', version: '4.2.5'
    androidTestImplementation group: 'io.cucumber', name: 'cucumber-picocontainer', version: '5.5.0'
    implementation 'pl.droidsonroids.gif:android-gif-drawable:1.2.7'
}

This is my Cucumber Runner:

package com.example.tetris.cucumber;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions( features = "src/androidTest/java/com/example/tetris/cucumber" )
public class CucumberRunner {
}

This is my HelloWorld Feature:

Feature: HelloWorld

  Scenario: HelloToTheWorld
    Given Nothing
    When NothingAgain
    Then Hello

And this is where I wanted to write the HelloWorld steps:

package com.example.tetris.cucumber;

import org.junit.Assert;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;


public class CucumberHello {
    @Given("^Nothing$")
    public void nothing() {
    }

    @When("^NothingAgain$")
    public void nothingagain() {
    }

    @Then("^Hello$")
    public void hello() {
        Assert.assertTrue(true);
    }
}

Just to let you know: if I delete the @CucumberOptions, the error changes to "No tests were found". Could someone please help me?

Note: Ignore the red colors in files, they are about Git and not Compilation Errors.

Have a nice day everyone.

Code coverage for Conditional Statement JEST

I am quite new to Jest and I am stuck with getting code coverage to 100 percent.

This is the if statement for which I am unable to get 100% coverage.

service.ts

public indexEqualtoSize(endInd: number, filesize: number): boolean {
  if (endInd === filesize) {
    return true;
  }
  return false;
}

This is being called in service.ts at this line:

if (this.indexEqualtoSize(endInd, index.file.size)) {
  startInd = endInd;
}

Can anyone please help me with how to write a test case for this in Jest?
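A branch-coverage sketch for the method above: the `if` has two outcomes, so full coverage needs one call where the numbers are equal and one where they differ. In Jest these would be two `it(...)` blocks using `expect(...).toBe(...)`; plain assertions are used here so the sketch runs standalone.

```javascript
// Standalone restatement of the method, plus the two calls that cover both
// branches of the if statement.
function indexEqualtoSize(endInd, filesize) {
  if (endInd === filesize) {
    return true;
  }
  return false;
}

// branch 1: equal -> true   (in Jest: expect(service.indexEqualtoSize(5, 5)).toBe(true))
console.assert(indexEqualtoSize(5, 5) === true);
// branch 2: not equal -> false (in Jest: expect(service.indexEqualtoSize(4, 5)).toBe(false))
console.assert(indexEqualtoSize(4, 5) === false);
```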

How to measure a unit test vs. an integration test?

I am working on my thesis, which is about software testing. The programming language is Java. I have a huge program with over 450,000 lines of code (not counting comments and blank lines), and there are also many JUnit tests.

My question right now is: how can I find out whether a given test is a unit test or an integration test?

My ideas: can I use the execution time of the tests, or measure the CPU load?

Do you have any tips, or more experience in software testing? I am not new to this, but this case is a bit new and huge for me...

Thank you in advance! :)

vendredi 27 mars 2020

Go DFA implementation best practice

Suppose I need to write a simple DFA in Go (dfa.go below counts whether the number of occurrences of "A" is odd).

However, when writing tests (dfa_test.go), I cannot reach 100% coverage (using go test -cover) because line 33 cannot be covered. Removing this line solves the problem, but I still want the code to panic when the DFA is incorrectly implemented (e.g. if I change it to counting the number of As modulo 3, it is easy to make a mistake).

So what is a good programming practice when writing DFAs in Go?

dfa.go:

package dfa

func DFA(input string) int {
    /*
        Transitions:
            (0, 'A') -> 1
            (0, 'B') -> 0
            (1, 'A') -> 0
            (1, 'B') -> 1
    */
    state := 0 // start state
    for _, i := range input {
        switch {
        case state == 0:
            switch i {
            case 'A':
                state = 1
            case 'B':
                state = 0
            default:
                panic("Invalid input")
            }
        case state == 1:
            switch i {
            case 'A':
                state = 0
            case 'B':
                state = 1
            default:
                panic("Invalid input")
            }
        default:
            panic("Invalid state")      // line 33
        }
    }
    return state
}

dfa_test.go:

package dfa

import (
    "testing"
)

func TestDFA(t *testing.T) {
    if DFA("AABBAABBABA") != 0 {
        t.Errorf("error")
    }
    func() {
        defer func() {
            if recover() == nil {
                t.Errorf("error")
            }
        }()
        DFA("AC")
    }()
    func() {
        defer func() {
            if recover() == nil {
                t.Errorf("error")
            }
        }()
        DFA("C")
    }()
}
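Since line 33 guards an invariant the loop can never actually violate, one hedged alternative (a sketch, not the original design) is to make the transition relation data instead of control flow: an undefined transition then becomes a reachable, testable map miss, and adding a third state is just three more table rows.

```go
// Sketch: drive the DFA from a transition table so that a missing
// (state, input) pair is caught by a map lookup the tests can reach,
// instead of by an unreachable default branch.
package main

import "fmt"

type key struct {
	state int
	input rune
}

var transitions = map[key]int{
	{0, 'A'}: 1,
	{0, 'B'}: 0,
	{1, 'A'}: 0,
	{1, 'B'}: 1,
}

// DFA returns the final state, panicking on any undefined transition.
func DFA(input string) int {
	state := 0
	for _, r := range input {
		next, ok := transitions[key{state, r}]
		if !ok {
			panic(fmt.Sprintf("no transition from state %d on %q", state, r))
		}
		state = next
	}
	return state
}

func main() {
	fmt.Println(DFA("AABBAABBABA")) // six As -> even -> 0
}
```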

What is the best software to test the performance of a button?

I have a new task: testing a button that is causing a delay. I played a little with JMeter, but only to look at the performance of links in general. Can I test the performance of web components with JMeter, or should I look for other software?

How do I test a python function which writes a json to disk?

I would like to know the best way to write a test function (to be run using pytest) for the following short function, which serialises a dict to a JSON file.

import json
import os

def my_function(folder):
    my_dict = {"a": "A", "b": "B", "c": "C"}
    with open(os.path.join(folder, 'my_json.json'), 'w') as f:
        json.dump(my_dict, f)

I would like the test to be written as a simple function (not as a method of a class inheriting from unittest.TestCase).

My current idea is

def test_my_function():
    my_function(folder)
    with open(os.path.join(folder, 'my_json.json'), 'r') as f:
        my_dict = json.load(f)
    assert my_dict == {"a": "A", "b": "B", "c": "C"}

I am wondering if there is a more elegant way of testing this without touching the disk.

Cannot publish json message on Kafka topic using ZeroCode

I am trying to create a test framework using ZeroCode for Kafka. The product I am testing is based on micro-services and Kafka. All I am trying to do, at the moment, is connect to my topic and publish a message to it. But when I run the test case I get an error saying 'Exception during operation: produce'.

I am using a .properties file for the broker and SSL credentials, then sending a test JSON. If publishing is successful, I plan to consume from a certain topic and assert on the values, thereby performing an integration test on the service.

Please help me resolve this, as I cannot find any meaningful information online about how to fix it. Much appreciated!

My .properties file look something like this:

security.properties=SSL
ssl.keystore.password=<myPassword>
ssl.keystore.location=<myLocation>
kafka.bootstrap.servers=<myServer>

My JSON file (Test Scenario, null key is a valid input to my topic) looks something like this:

{
    "scenarioName": "Produce a message to kafka topic - vanilla",
    "steps": [
        {
            "name": "produce_step",
            "url": "kafka-topic:my.topic",
            "operation": "produce",
            "request": {
                "records":[
                    {
                        "value": "My test value"
                    }
                ]
            },
            "assertions": {
                "status" : "Ok"
            }
        }
    ]
}

How to become a tester in USA? [closed]

I will be in the US Army for another 3 years, and I am learning the C programming language in my free time; it is my first interaction with programming in general. After leaving the Army I would like to start working as a tester. There is also the opportunity to study web-related subjects at a local community college afterwards. Could anyone advise on the best path: studying, choosing the testing field, choosing a school or courses after the Army or even now (possibly online courses), the best way to find a job, and what to expect at the hiring interview? P.S. I am 32. I appreciate your time and help with this matter!

python; monkeypatch in pytest setup_module

My question is similar to, but not a duplicate of: How to use monkeypatch in a "setup" method for unit tests using pytest?. That OP is not using pytest's setup_module.

So pytest provides setup_module and teardown_module (doc: https://docs.pytest.org/en/latest/xunit_setup.html#module-level-setup-teardown). However, they don't seem to take pytest fixtures.

I have to monkeypatch an object for multiple tests; it has to be patched, "started", then used in a bunch of tests, then stopped. It is not really a use case for fixtures since it's a multithreaded application and we are testing against the running application.

Right now, in an effort to hack around setup_module, I am doing a series of tests that cannot be run in parallel because all of them depend on the first test:

def test_1_must_come_first(monkeypatch, somefixture...):
    # patch my thing
    monkeypatch.setattr("mything.init_func", somefixture)
    mything.start()

def test_2()
    # use mything

def test_3()
    # use mything

...

def teardown_module():
    mything.stop()    

What I would like instead is to move the patching in the first step so that the tests themselves can be run in parallel and not dependent:

def setup_module(monkeypatch, somefixture):
    monkeypatch.setattr("mything.init_func", somefixture)
    mything.start()

def test_1():
    # use mything

def test_2()
    # use mything

# NO LONGER AN ORDERING DEPENDENCY AMONG TESTS
...

def teardown_module():
    mything.stop()

Is there a way to achieve this?

Is it bad practice to use the DTOs of the system under test in the actual test?

I am creating some "black-box" functional tests for a Java/Spring web application that has many DTOs, some of them quite complex. When it comes to creating the JSON body for the test HTTP requests, it would save me a lot of time to reuse the existing DTOs; however, doesn't that defeat the "black-box" part? If I'm not supposed to use them, what is the simplest approach?

Thank you for your attention.

Library testing strategy with communications

thank you for reading!

I have developed a Java integration SDK for communication with a REST API endpoint. I would like advice on testing strategy and execution.

Should the integration SDK and REST server be deployed in Docker containers, with a client executing test calls, or should the tests go against a running dev server? Or should I make a custom endpoint?

Are there any standard REST integration testing protocols?

Any advice on how to simulate timeouts, latency, an unreachable endpoint, and retries?

Happy Friday guys!

Testing PUT method in Django Rest Framework

I'm trying to test a PUT method in Django REST framework, but I get an HttpResponsePermanentRedirect instead of a response object. My view for the PUT method is set to send status 200 after a successful update.
Error:
self.assertEqual(response.data, serializer.data) AttributeError: 'HttpResponsePermanentRedirect' object has no attribute 'data'

tests.py

class PostTestGetAndPutMethod(APITestCase):
    def setUp(self):
        Post.objects.create(title="POST CREATED", content="POST WAS CREATED")
        Post.objects.create(title="POST CREATED 2", content="POST WAS CREATED 2")
        Post.objects.create(title="POST CREATED 3", content="POST WAS CREATED 3")

    def test_get_posts(self):
        '''
        Ensure we can get list of posts
        '''
        # get API response 
        response = self.client.get(reverse('posts'))
        # get data from DB
        posts = Post.objects.all()
        # convert it to JSON
        serializer = PostSerializer(posts, many=True)
        # check the status 
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, serializer.data)

    def test_update_put_post(self):
        '''
        Check if we can update post 
        '''
        data = {'title': 'POST MODIFIED', 'content': 'CONTENT MODIFIED'}
        response = self.client.put('/posts/1', data)
        serializer = PostSerializer(data)
        self.assertEqual(response.data, serializer.data)
        self.assertEqual(response.status_code, status.HTTP_200_OK)

views.py

@api_view(['GET', 'PUT', 'DELETE'])
def post_detail(request, pk):
    """
    Retrieve, update or delete a code snippet.
    """
    try:
        post = Post.objects.get(pk=pk)
    except Post.DoesNotExist:
        return Response(status=status.HTTP_404_NOT_FOUND)

    if request.method == 'GET':
        serializer = PostSerializer(post)
        return Response(data=serializer.data, status=status.HTTP_200_OK)

    elif request.method == 'PUT':
        serializer = PostSerializer(post, data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data, status=status.HTTP_200_OK)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    elif request.method == 'DELETE':
        post.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

TestNG eclipse plugin

I am using the TestNG Eclipse plugin to execute test cases. I need to apply both @Test method groups and a testng.xml file, but the plugin allows only one of the two, as shown by the radio buttons in the screenshot. Is there any way to apply groups and suites together?

The whole purpose is to apply groups together with allow-return-values="true" in the test methods.

(screenshot of the plugin's run configuration)

Laravel: Assert Email verification is sent with mail:fake() in a test

I'm making an SPA and using Laravel for the API. This requires making my own authentication structure. I've borrowed from Laravel's auth and I'm making it work for my REST API.

I have a registration test and I've added assertions to test if the email verification email gets sent to the user that just registered.

The docs explain you can fake email sends and assert that specific ones get sent: https://laravel.com/docs/6.x/mocking#mail-fake

I'm having trouble finding out how to assert that the email verification email specifically was sent.

    /**
     * @test
     */
    public function a_user_can_register_and_receives_verification_email()
    {


        Mail::fake();

        $response = $this->json( 'POST', '/api/register', [
            'first_name' => 'User',
            'last_name' => 'Example',
            'email' => 'user@example.com',
            'password' => 'password'
        ])->assertStatus(201);

        $user = $response->getData();

        $this->assertDatabaseHas('users', [
            'email' => $user->email
        ]);

        // Assert a message was sent to the given users...
        Mail::assertSent(MailMessage::class, function ($mail) use ($user) {
            return $mail->hasTo($user->email);
        });


    }

I was trying to hunt down what sendEmailVerificationNotification() does, and it led me to Illuminate\Auth\Notifications\Verify

It is a little confusing to me, but it seems to be sending a MailMessage, which is what I'm asserting. However, my test fails: The expected [Illuminate\Notifications\Messages\MailMessage] mailable was not sent

This is my register method on my controller:

    public function register(Request $request) {

        $request->validate([
            'first_name' => ['required', 'string', 'max:255'],
            'last_name' => ['required', 'string', 'max:255'],
            'email' => ['required', 'string', 'email', 'max:255', 'unique:users'],
            'password' => ['required', 'string', 'min:8'],
        ]);

        $user = User::create([
            'first_name' => $request->first_name,
            'last_name' => $request->last_name,
            'email' => $request->email,
            'password' => Hash::make($request->password)
        ]);

        event(new Registered($user));

        return $user;

    }

api routes

Route::post('register', 'Auth\RegisterController@register');
Route::get('email/resend', 'Auth\VerificationController@resend')
    ->name('verification.resend');

Route::get('email/verify/{id}/{hash}', 'Auth\VerificationController@verify')
    ->name('verification.verify');

If I hit my register route via Postman, the email does get sent with the appropriate link. I just can't get the test to pass.

JavaScript xPath command does not run co-mingled with Selenium Java program on every run

The JavaScript below (co-mingled with Java in Selenium) does not run the XPath command successfully on every run. My Java commands run fine; it's the XPath I'm having issues with (in other words, sometimes the XPath command runs successfully and at other times it does not). I changed the JDK from 13 to JDK 8, and that didn't help. I don't know what's needed.

I'm new to automation testing and I'm teaching myself.

Here's the command line:

driver.findElement(By.xpath("//button[@type='button' and @data-test-id='checkbox']")).click();

Error response:

JavaScript error: , line 0: NotAllowedError: The play method is not
allowed by the user agent or the platform in the current context,
possibly because the user denied permission.
Exception in thread "main"
org.openqa.selenium.StaleElementReferenceException: The element
reference of <button class="c27KHO0_n b_0 M_0 i_0 I_T y_Z2uhb3X
A_6EqO r_P C_q cvhIH6_T ir3_1JO2M7 P_0" type="button"> is stale;
either the element is no longer attached to the DOM, it is not
in the current frame context, or the document has been refreshed