Thursday, 31 October 2019

Write JUnit test cases for a core Java application

I have a BMI calculator Java application that calculates a user's BMI so they can know their current body-weight status. The program uses both the metric (kg/m) and imperial (lb/in) BMI formulas, and after getting the result the user is told his/her BMI category (underweight, normal, overweight or obese).

I need to write JUnit test cases for this application; its source code is shown below. Can anyone help me write JUnit test cases for this class, please?

import java.text.DecimalFormat;
import java.util.Scanner;

public class MainSrc {

final static double KG_TO_LBS = 2.20462;
final static double M_TO_IN = 39.3701;
private static DecimalFormat TWO_DECIMAL_PLACES = new DecimalFormat(".##");

public static void main(String[] args) {
    double height, weight;

    System.out.println("The BMI Calculator by iasjem");     
    System.out.println();

    Scanner askUser= new Scanner(System.in);

    System.out.println("What is your weight (kg)?");
    weight = askUser.nextDouble();

    System.out.println("What is your height (m)?");
    height = askUser.nextDouble();

    System.out.println();
    System.out.println("Your body weight is:");
    System.out.println("In Metric units: " + TWO_DECIMAL_PLACES.format(metricFormula(height, weight)) + " kg");
    System.out.println("In Imperial units: " + TWO_DECIMAL_PLACES.format(imperialFormula(height, weight)) + " lbs");
    System.out.println("As a result, you are " + getCategory(metricFormula(height, weight)));

    askUser.close();
    System.exit(0);
}

// THE METRIC FORMULA
public static double metricFormula(double m, double kg) {
    double result=0;

    result = kg / (Math.pow(m, 2));

    return result;
}

// THE IMPERIAL FORMULA
public static double imperialFormula(double m, double kg) {
    double result=0;
    // convert kg to lbs and m to in
    double lbs = Math.round(kg * KG_TO_LBS);
    double in = Math.round((m * M_TO_IN) *100);

    result = (lbs / (Math.pow((in/100), 2)))* 703;

    return result;
}

// THE BMI CATEGORIES
public static String getCategory (double result) {
    String category;
    if (result < 15) {
        category = "very severely underweight";
    } else if (result >=15 && result <= 16) {
        category = "severely underweight";
    } else if (result >16 && result <= 18.5) {
        category = "underweight";
    } else if (result >18.5 && result <= 25) {
        category = "normal (healthy weight)";
    } else if (result >25 && result <= 30) {
        category = "overweight";
    } else if (result >30 && result <= 35) {
        category = "moderately obese";
    } else if (result >35 && result <= 40) {
        category = "severely obese";
    } else {
        category ="very severely obese";
    }
    return category;
}

}
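
For illustration, here is a minimal JUnit 4 sketch of what tests for the three static methods could look like. The expected values are worked out from the formulas above; the delta tolerances are an assumption.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MainSrcTest {

    @Test
    public void metricFormulaComputesBmi() {
        // 70 kg at 1.75 m -> 70 / 1.75^2 = 22.857...
        assertEquals(22.86, MainSrc.metricFormula(1.75, 70), 0.01);
    }

    @Test
    public void imperialFormulaStaysCloseToMetricResult() {
        // Both formulas compute the same BMI, apart from the rounding done in imperialFormula.
        double metric = MainSrc.metricFormula(1.75, 70);
        double imperial = MainSrc.imperialFormula(1.75, 70);
        assertEquals(metric, imperial, 0.5);
    }

    @Test
    public void categoriesMatchTheBoundariesAbove() {
        assertEquals("underweight", MainSrc.getCategory(17.0));
        assertEquals("normal (healthy weight)", MainSrc.getCategory(22.0));
        assertEquals("overweight", MainSrc.getCategory(27.0));
        assertEquals("moderately obese", MainSrc.getCategory(31.0));
    }
}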

Debug lack of test caching in Go

I am running a Go test that I think should be cached when I run it more than once. However, Go is running the test every time.

Is there a flag or environment variable I can use to help determine why Go is deciding not to cache this particular test?

Kotlin JS test doesn't export the main module

I am trying to run tests with the latest Kotlin JS plugin. A build.gradle was created by IDEA master:

plugins {
    id 'org.jetbrains.kotlin.js' version '1.3.50'
}

group 'ru.altmaea'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-js"
    testImplementation "org.jetbrains.kotlin:kotlin-test-js"
}

kotlin.target.browser { }

When the test doesn't use the main module, it works and reports the expected failure:

import kotlin.test.Test
import kotlin.test.assertEquals

class IndexTest{

    @Test
    fun testOne() {
        assertEquals("11", "22")
    }

}

When I add a variable from the main module to the test and run the "test" task, no tests are run and there are no error messages:

    @Test
    fun testTwo() {
        assertEquals("11", x)
    }

where x is defined in the main module.

MCVE.

How is it possible to run testTwo?

In a test my form looks right but doesn't validate

I want to test a form. It is working, but the test doesn't.

One field of this form is populated by a JavaScript function. I could use Selenium to do that, but I don't want to, because it's causing problems and I also want to isolate the test.

So I'm instantiating the form in my test, then creating the choices (this is what the JavaScript would do), then setting the field values.

My models.py:

class Name(models.Model):
    name = models.CharField(_('nome'), max_length=50, default='')
    namelanguage = models.ForeignKey(
        NameLanguage, related_name='%(app_label)s_%(class)s_language',
        verbose_name=_('linguaggio'), on_delete=models.PROTECT)
    nametype = models.ForeignKey(
        NameType, related_name='%(app_label)s_%(class)s_tipo',
        verbose_name=_('tipo'), on_delete=models.PROTECT)
    gender = models.ForeignKey(
       Gender, related_name='%(app_label)s_%(class)s_gender',
       verbose_name=_('sesso'), on_delete=models.PROTECT,
       blank=True, null=True)
    usato = models.PositiveSmallIntegerField(_('usato'), default=0)
    approved = models.BooleanField(null=True, blank=True, default=False)

    def save(self, *args, **kwargs):
        self.name = format_for_save_name(self.name)
        to_save = check_gender_name(self)
        if not to_save:
            return
        else:
            super(Name, self).save(*args, **kwargs)

    def format_for_save_name(name):
        myname = name.lower().strip()
        if myname[0] not in "abcdefghijklmnopqrstuvwxyz#":
            myname = '#' + myname
        return myname

My form.py:

class NameForm(forms.ModelForm):
    class Meta:
        model = Name
        fields = ['namelanguage', 'nametype', 'gender', 'name', 'usato',
              'approved']
        widgets = {
            'gender': forms.RadioSelect(),
            'usato': forms.HiddenInput(),
            'approved': forms.HiddenInput(),
        }

My test_form.py:

def test_form_validation(self):
    maschio = Gender.objects.create(name_en='Male', name_it='Maschio')
    nome = NameType.objects.create(name_en='Name', name_it='Nome')
    romani = NameLanguage.objects.create(
        name_en='Romans', name_it='Romani')
    romani.sintassi.add(nome)
    form = NameForm()
    form.fields['nametype'].disabled = False
    form.fields['nametype'].choices = [(nome.id, nome)]
    form.fields['nametype'].initial = nome.id
    form.fields['gender'].initial = maschio.id
    form.fields['name'].initial = 'Bill'
    form.fields['namelanguage'].initial = romani.id
    # form.fields['usato'].initial = 0
    # form.fields['approved'].initial = False
    print('1', form)
    # self.assertTrue(form.is_valid())
    form.save()

print() shows the form without errors, but form.is_valid() is False, and (when the assertion is commented out) form.save() raises an error when the model tries to save the name field:

if myname[0] not in "abcdefghijklmnopqrstuvwxyz#":
IndexError: string index out of range

That is because name is an empty string, and yet my print('1', form) shows all the fields with the right options selected; in particular, the name field isn't empty but has value="Bill":

<td><input type="text" name="name" value="Bill" maxlength="50" autofocus="" required id="id_name">

Why is there an "undefined reference to" error when compiling a Catch2 test? How should I fix it?

I'm trying to write a test for Catch2 but am running into some head-scratching errors.

I've looked online for similar cases to mine, but it seems like they all used CMakeLists.txt files, whereas I only downloaded catch.hpp from the Catch2 README.

In my folder, I have getDifferences.cpp, getDifferences.hpp, and test.cpp, where I put #define CATCH_CONFIG_MAIN, #include "catch.hpp", and #include "getDifferences.hpp" at the top of the file, and have no other main() defined.

My laptop: OS: dual-boot ChromeOS / XFCE4; compiler: g++ (Ubuntu 5.4.0); Catch version: 2.10.0 (latest).

Next, I tried compiling test.cpp with:

g++ -std=gnu++11 test.cpp

and it results in the error below:

/tmp/ccd1AxMb.o: In function `____C_A_T_C_H____T_E_S_T____0()':
test.cpp:(.text+0x27be4): undefined reference to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
test.cpp:(.text+0x27daf): undefined reference to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
test.cpp:(.text+0x27f71): undefined reference to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/tmp/ccd1AxMb.o: In function `____C_A_T_C_H____T_E_S_T____2()':
test.cpp:(.text+0x284b7): undefined reference to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
test.cpp:(.text+0x28740): undefined reference to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/tmp/ccd1AxMb.o:test.cpp:(.text+0x28ca8): more undefined references to `getDifferences(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)' follow
collect2: error: ld returned 1 exit status

I'm not sure why I get an undefined reference to getDifferences, a function that I've defined and whose header I've included. Any pointers would be very helpful. Thank you in advance!

The method startFlow(FlowLogic<? extends T>, InvocationContext) in the type FlowStarter is not applicable for the arguments

To test my CorDapp, I followed the instructions in the API: Testing documentation and the FlowTest.java file from the cordapp-template-java project. You will find my code below. I am struggling with the StartedNodeServices.startFlow() function. The documentation states that it should take a FlowLogic<T>, which would be the instance of the InitiatorFlow class in my example. However, the testing documentation shows two arguments. The code below results in the following error:

The method startFlow(FlowLogic<? extends T>, InvocationContext) in the type FlowStarter is not applicable for the arguments (ServiceHub, InitiatorFlow)

I am not sure how to deal with this, since the first argument shown in the testing documentation is not a FlowLogic. If I switch the arguments, the same error occurs.

Maybe you can give me a hint on how to deal with this. Thank you for your help!

package com.template;

import com.google.common.collect.ImmutableList;
import com.template.flows.InitiatorFlow;
import com.template.flows.Responder;
import com.template.states.MyState;

import net.corda.core.concurrent.CordaFuture;
import net.corda.core.context.InvocationContext;
import net.corda.core.flows.InitiatedBy;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.CordaX500Name;
import net.corda.core.transactions.SignedTransaction;
import net.corda.testing.node.MockNetwork;
import net.corda.testing.node.MockNetworkParameters;
import net.corda.testing.node.StartedMockNode;
import net.corda.testing.node.TestCordapp;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import net.corda.node.services.api.StartedNodeServices;
import net.corda.node.services.statemachine.ExternalEvent.ExternalStartFlowEvent;
import net.corda.node.services.api.StartedNodeServices.*;



public class FlowTests {
    private final MockNetwork network = new MockNetwork(new MockNetworkParameters(ImmutableList.of(
        TestCordapp.findCordapp("com.template.contracts"),
        TestCordapp.findCordapp("com.template.flows")
    )));
    private final StartedMockNode alice = network.createPartyNode(new CordaX500Name("Alice", "London", "GB"));
    private final StartedMockNode bob = network.createPartyNode(new CordaX500Name("Bob", "Paris", "FR"));



    public FlowTests() {
        alice.registerInitiatedFlow(InitiatorFlow.class);
        bob.registerInitiatedFlow(InitiatorFlow.class);
    }

    @Before
    public void setup() {

        network.runNetwork();
    }

    @After
    public void tearDown() {
        network.stopNodes();
    }


    @Test
    public void dummyTest() {

        CordaFuture<SignedTransaction> future = StartedNodeServices.startFlow(alice.getServices(), new InitiatorFlow(5, 100,  bob.getInfo().getLegalIdentities().get(0)));
        network.runNetwork();
        SignedTransaction signedTransaction = future.get();
    }
}
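
For comparison, here is a sketch of how the cordapp-template tests usually start flows, using StartedMockNode.startFlow instead of the internal StartedNodeServices API (this assumes InitiatorFlow's call() returns a SignedTransaction, as the future type above suggests):

    @Test
    public void dummyTest() throws Exception {
        // Sketch: start the flow directly on the mock node and drive the mock network.
        CordaFuture<SignedTransaction> future = alice.startFlow(
                new InitiatorFlow(5, 100, bob.getInfo().getLegalIdentities().get(0)));
        network.runNetwork();
        SignedTransaction signedTransaction = future.get();
    }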

Test CustomValidator which depends on JPA repository

I have this validator...

public class UniqueFurlValidator implements ConstraintValidator<Unique, String> {
    @Autowired
    private SponsorService sponsorService;

    @Override
    public void initialize(Unique unique) {
        unique.message();
    }

    @Override
    public boolean isValid(String furl, ConstraintValidatorContext context) {
        if (sponsorService != null && sponsorService.existsByFurl(furl)) {
            return false;
        }
        return true;
    }
}

It is used on a form field as @Unique, and the controller uses it via @Valid. How can I test this? The code depends on the repository: if the furl already exists, an error message is shown on the web page saying that it is not unique.
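
For illustration only, a minimal sketch of a plain unit test with JUnit 4 and Mockito that bypasses the Bean Validation bootstrap and exercises isValid() directly (the mock is injected into the @Autowired field via @InjectMocks; the test class name is hypothetical):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class UniqueFurlValidatorTest {

    @Mock
    private SponsorService sponsorService;

    @InjectMocks
    private UniqueFurlValidator validator;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void rejectsFurlThatAlreadyExists() {
        // The repository-backed service says the furl is taken, so validation fails.
        when(sponsorService.existsByFurl("taken")).thenReturn(true);
        assertFalse(validator.isValid("taken", null));
    }

    @Test
    public void acceptsFurlThatDoesNotExist() {
        when(sponsorService.existsByFurl("free")).thenReturn(false);
        assertTrue(validator.isValid("free", null));
    }
}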

Do requirements testing tools exist?

I have an exercise about requirements testing tools, not requirements management tools. I searched a lot on Google but couldn't find anything like that.

Xcode 11 unit tests always fail

Unit tests always fail in Xcode 11 in my project. I try to run a test where I only do XCTAssertTrue(true). In the test log I get this message:

https://gyazo.com/3b3ac107b66d11addfaee12cd1a5d961

I have tried cleaning the build folder, setting the build system to legacy, restarting Xcode, deleting all other tests, and running just the test class without any test methods. Everything fails and produces this error.

Test REST API with parameter

I am testing my REST API...

@RestController
@RequestMapping(path = "/api")
public class SponsorAPI {
    private SponsorService sponsorService;

    @Autowired
    public SponsorAPI(SponsorService sponsorService) {
        this.sponsorService = sponsorService;
    }

    @GetMapping(path = "/findTopFiveSponsors")
    public ResponseEntity<?> get(@Param("charityId") Long charityId) {
        return ResponseEntity.ok(sponsorService.findTopFiveSponsors(charityId));
    }
}

The test is...

@RunWith(SpringRunner.class)
@WebMvcTest(SponsorAPI.class)
public class SponsorAPITest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    SponsorService sponsorService;

    @MockBean
    CharityService charityService;

    @Test
    public void shouldReturnTheTopFiveSponsors() throws Exception {
        Charity nspcc = new Charity(1L,
                "12345678",
                "National Society for the Prevention of Cruelty to Children",
                "Kids charity",
                "nspcc",
                "NSPCC",
                true);

        given(charityService.findById(1L)).willReturn(Optional.of(nspcc));

        Sponsor sponsor1 = new Sponsor(2L,
                "Foo Bar",
                nspcc,
                "Running a marathon",
                "To help raise funds",
                LocalDateTime.now(),
                LocalDateTime.now(),
                LocalDateTime.now().plusMonths(6),
                "foo-bar");

        // List<Sponsor> sponsors = new ArrayList<>();
        // sponsors.add(sponsor1);

        // given(sponsorService.findById(2L)).willReturn(Optional.of(sponsor1));

        mockMvc.perform(get("/api/findTopFiveSponsors").contentType(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.fundraiserName", is("Foo Bar")));
    }
}

However, I keep getting...

java.lang.AssertionError: No value at JSON path "$.fundraiserName"

    at org.springframework.test.util.JsonPathExpectationsHelper.evaluateJsonPath(JsonPathExpectationsHelper.java:295)
    at org.springframework.test.util.JsonPathExpectationsHelper.assertValue(JsonPathExpectationsHelper.java:72)
    at org.springframework.test.web.servlet.result.JsonPathResultMatchers.lambda$value$0(JsonPathResultMatchers.java:87)
    at org.springframework.test.web.servlet.MockMvc$1.andExpect(MockMvc.java:195)
    at com.nsa.charitystarter.controllers.api.SponsorAPITest.shouldReturnTheTopFiveSponsors(SponsorAPITest.java:70)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:74)
    at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:84)
    at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
    at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
    at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
    at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: com.jayway.jsonpath.PathNotFoundException: Expected to find an object with property ['fundraiserName'] in path $ but found 'net.minidev.json.JSONArray'. This is not a json object according to the JsonProvider: 'com.jayway.jsonpath.spi.json.JsonSmartJsonProvider'.
    at com.jayway.jsonpath.internal.path.PropertyPathToken.evaluate(PropertyPathToken.java:71)
...

This is not a duplicate question, as I have tried the other solutions like $.[0].fundraiserName but still had no success. Is any other code needed to solve the problem? Any advice on getting this to work? Is there a way to see the contents of the JSON and check the output, or do I need some kind of JSON mapper? Or is the problem to do with the charity ID?

Also, I was running test coverage with JaCoCo, and it says I haven't covered this line: return ResponseEntity.ok(sponsorService.findTopFiveSponsors(charityId)); Can my current test cover this?
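
For what it's worth, a sketch of one possible adjustment, based on two assumptions: the endpoint returns a JSON array (as the PathNotFoundException suggests), and findTopFiveSponsors itself is never stubbed. The charityId parameter name is taken from the controller above, and Collections (java.util) and any() (org.mockito.ArgumentMatchers) are assumed to be imported.

    // Sketch: stub the call the controller actually makes, then address the
    // first element of the returned JSON array.
    given(sponsorService.findTopFiveSponsors(any()))
            .willReturn(Collections.singletonList(sponsor1));

    mockMvc.perform(get("/api/findTopFiveSponsors")
                .param("charityId", "1")
                .contentType(MediaType.APPLICATION_JSON))
            .andDo(print())
            .andExpect(status().isOk())
            .andExpect(jsonPath("$[0].fundraiserName", is("Foo Bar")));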

VSCode - Debugging / Testing - TextEditor

I implemented a VS Code extension and now I want to test it. My extension consists of three functions. I was able to test two of them using vscode-test's runTest, Mocha, and this launch config:

{
    "name": "Extension Test",
    "type": "extensionHost",
    "request": "launch",
    "runtimeExecutable": "${execPath}",
    "args": [
        "--extensionDevelopmentPath=${workspaceFolder}",
        "--extensionTestsPath=${workspaceFolder}/out/extension/tests/suite/index"
    ],
    "outFiles": [
        "${workspaceFolder}/out/extension/tests/**/*.js"
    ]
}

But I am stuck now, because the third function needs a vscode.TextEditor and I can't find a way to create or get one. Launching this doesn't give the test files any TextEditors. I tried to get one using:

let editor1 = vscode.window.visibleTextEditors; // --> length = 0
let editor2 = vscode.window.activeTextEditor; // --> undefined

Is there a way to test this?

Test Nested Tag attribute inside Parent Tag's attribute

I have a tag that is nested inside another tag's attribute.

Exhibit a.

<Tag
    input={(
      <NestedTag
        value={this.props.value}
        onChange={this.validate}
      />
    )}
  />

What I want to do is check that the value inside NestedTag is correct.

In Enzyme, I've got as far as this:

expect(
  wrap
    .find(Tag)
    .at(0)
    .prop('input')
).toBe(...tag info goes here);

This is as far as I've gotten. I just don't know how to look inside "input" and poke around inside NestedTag.

I know I can use toMatchObject, but I'd prefer to check each individual property in isolation.

JUnit Mockito ReflectionEquals on List of Objects

I need to test whether a returned list of objects is the same as the expected value. Since I do not want to override the equals method just for the test, I'm looking for a way to compare the objects by their values. I found the ReflectionEquals class, but it does not work on a list. Is there any other approach to test this?

@Test
public void testUseOnFieldWithRuleApplied() {
    List<Field> argumentFields = new ArrayList<>(Arrays.asList(new Field(1), new Field(2)));

    List<Field> expectedFields = new ArrayList<>(Arrays.asList(new Field(0), new Field(1)));
    List<Field> returnedFields = this.toBeTestedRule.useOnFields(argumentFields);

    assertTrue(new ReflectionEquals(expectedFields).matches(returnedFields));    
}
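
One possible direction, sketched under the assumption that Field has no nested collections: compare the lists element by element with the same ReflectionEquals helper, so Field still does not need equals/hashCode.

    // Sketch: element-wise reflection comparison of the two lists.
    assertEquals(expectedFields.size(), returnedFields.size());
    for (int i = 0; i < expectedFields.size(); i++) {
        assertTrue(new ReflectionEquals(expectedFields.get(i))
                .matches(returnedFields.get(i)));
    }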

Cross-browser testing in Selenium with Python

from selenium import webdriver as wd
import pytest
import time
Chrome=wd.Chrome(executable_path=r"C:\Chrome\chromedriver.exe")
Firefox=wd.Firefox(executable_path=r"C:\geckodriver\geckodriver.exe")
class TestLogin():
    @pytest.fixture()
    def setup1(self):
        browsers=[Chrome, Firefox]
        for i in browsers:
            self.driver= i
            i.get("https://www.python.org")
            time.sleep(3)

        yield
        time.sleep(3)
        self.driver.close()

    def test_Python_website(self,setup1):
        self.driver.find_element_by_id("downloads").click()
        time.sleep(3)

I am trying to run this code to perform some actions in the Chrome and Firefox browsers. But when I run the program, the Chrome browser starts and the test case does not run in Chrome; then Firefox opens and the test case works fine in Firefox. I tried a for loop and a couple of other things, but they didn't work. Could someone please help?

Rails tests fail when run as a suite but pass when run individually and also pass on CI

I'm using minitest in Rails 5.2 with FactoryBot.

When I run tests individually (e.g. models/enhanced_object_test.rb) they pass, but when I run the whole suite, seemingly unrelated tests start to fail on seemingly unrelated methods.

There are a number of similar questions on SO, but they haven't helped. I've tried using DatabaseCleaner between each test, but I have the same problem.

I suspect my problem relates to the way I am using factories, but I am at a loss as to how to debug it.

The error I am getting always relates to the method import_colms_images_to_s3, e.g.:

error relating to tests/models/info_test_page.rb

test_metadata                                                   FAIL (0.43s)
        not all expectations were satisfied
        unsatisfied expectations:
        - expected exactly once, invoked twice: #<EnhancedObject:0x55652b343e20>.import_colms_images_to_s3(any_parameters)
        - expected exactly once, invoked twice: #<EnhancedObject:0x556527763e28>.import_colms_images_to_s3(any_parameters)
        - expected exactly once, invoked twice: #<EnhancedObject:0x5565289bb170>.import_colms_images_to_s3(any_parameters)

The only place I have an expectation relating to import_colms_images_to_s3 is in three tests in the enhanced_object model test, e.g.:

tests/models/enhanced_object_test.rb

## Stubbed the import_colms_images_to_s3 method to avoid the db call to museum_object

test "create enhanced object success" do
    eo = EnhancedObject.new
    eo.name = "test"
    eo.short_description = "shrt dscrptn"
    eo.long_description = "Loooong description"
    eo.museum_unique_id = "O70135"
    eo.colms_images = ["2006ar9715"]
    eo.expects(:import_colms_images_to_s3).returns(true)
    assert eo.save, "Saved enhanced object"
  end

The test for the info page model is:

require 'test_helper'

class InfoPageTest < ActiveSupport::TestCase

  setup do
    @infopage = create(:info_page)
  end

 test "metadata" do
    assert @infopage.metadata.present?
    assert_equal(@infopage.title, @infopage.metadata[:title])
  end
end

This test uses an info_page factory but again this has no connection to enhanced_objects and the failing method.

test/factories/info_pages.rb

FactoryBot.define do
  factory :info_page do
    sequence(:title) { |n| "Info page #{n}" }
    sequence(:slug) { |n| "info-page-#{n}" }
    body { '{"data":[{"type":"heading","data":{"text":"Test info page",'\
         '"format":"html"}},{"type":"text","data":{"text":'\
         '"<p>This is some <b>body</b> text...</p>",'\
         '"format":"html"}},{"type":"quote","data":{"text":'\
         '"<p>This is a quote</p>", "format":"html",'\
         '"cite":"By someone useful."}}]}' }
    background_image {
      fixture_file_upload(
        Rails.root.join('test', 'fixtures', 'header.jpg'),
        'image/jpg') }
    background_image_credit { 'Kermit T. Frog' }
    colour_scheme { ColourScheme::COLOUR_SCHEMES.first }
  end

  trait :without_image do
    background_image { nil }
  end
end

As far as I am concerned, info pages and enhanced objects are unrelated, so I can't understand why the tests are interacting in some way.

The final clue is that when the test suite is run on Circle CI, the tests pass even when run as a suite.

Given no modulus operator or even/odd function, how would one check for an odd or even number?

I recently sat a computing exam at university in which we had not been taught the modulus function or any other even/odd check beforehand, and we had no access to external documentation except our previous lecture notes. Is it possible to do this without them, and how?
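
For illustration, a minimal sketch (in Java, purely as an assumed example language) of two common ways to check parity without the modulus operator:

// Sketch: parity checks that avoid the % operator.
static boolean isEvenByDivision(int n) {
    // Integer division truncates, so (n / 2) * 2 equals n only when n is even.
    return (n / 2) * 2 == n;
}

static boolean isEvenByBitmask(int n) {
    // The lowest binary digit of an even number is 0.
    return (n & 1) == 0;
}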

Test REST API with parameter

I have this REST controller...

@RestController
@RequestMapping(path = "/api")
public class SponsorAPI {
    private SponsorService sponsorService;

    @Autowired
    public SponsorAPI(SponsorService sponsorService) {
        this.sponsorService = sponsorService;
    }

    @GetMapping(path = "/findTopFiveSponsors")
    public ResponseEntity<?> get(@Param("charityId") Long charityId) {
        return ResponseEntity.ok(sponsorService.findTopFiveSponsors(charityId));
    }
}

I want to test this. The logic is that when the user is on the charity page, they are shown the top five sponsors in a table (using the JPA query along with Thymeleaf).

I have written this test so far...

@RunWith(SpringRunner.class)
@WebMvcTest(SponsorAPI.class)
public class SponsorAPITest {
    @Autowired
    private MockMvc mockMvc;

    @MockBean
    SponsorService sponsorService;

    @MockBean
    CharityService charityService;

    @Test
    public void shouldReturnTheTopFiveSponsors() throws Exception {
        Sponsor sponsor1 = new Sponsor(2L,
                "Foo Bar",
                charityService.findById(1L).get(),
                "Running a marathon",
                "To help raise funds",
                LocalDateTime.now(),
                LocalDateTime.now(),
                LocalDateTime.now().plusMonths(6),
                "foo-bar");

        Sponsor sponsor2 = new Sponsor(3L,
                "Baz Qux",
                charityService.findById(1L).get(),
                "Participating in a cycling race",
                "To help raise funds",
                LocalDateTime.now(),
                LocalDateTime.now(),
                LocalDateTime.now().plusMonths(6),
                "baz-qux");

        List<Sponsor> sponsors = new ArrayList<>();
        sponsors.add(sponsor1);

        given(sponsorService.findById(2L)).willReturn(Optional.of(sponsor1));

        mockMvc.perform(get("/findTopFiveSponsors").contentType(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.fundraiserName", is("Foo Bar")));
    }
}

But I get this error...

No value present
java.util.NoSuchElementException: No value present
    at java.base/java.util.Optional.get(Optional.java:148)
    at com.nsa.charitystarter.controllers.api.SponsorAPITest.shouldReturnTheTopFiveSponsors(SponsorAPITest.java:43)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:74)
    at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:84)
    at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
    at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
    at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
    at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
    at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
    at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
    at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:117)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:155)
    at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:137)
    at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
    at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
    at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
    at java.base/java.lang.Thread.run(Thread.java:834)

Any advice on an alternative way of testing this, or on how the code above can be fixed? Is the problem that I am not passing in a charity ID? The missing value comes from fetching a charity, but I am sure that the charity is present in the database, as I have checked.

EDIT: I solved the error by doing...

@Test
    public void shouldReturnTheTopFiveSponsors() throws Exception {
        Charity nspcc = new Charity(1L,
                "12345678",
                "National Society for the Prevention of Cruelty to Children",
                "Kids charity",
                "nspcc",
                "NSPCC",
                true);

        given(charityService.findById(1L)).willReturn(Optional.of(nspcc));

But now I get...

java.lang.AssertionError: Status 
Expected :200
Actual   :404
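
One detail worth double-checking (an assumption based on the mappings shown above): the controller is mounted under /api, while the test requests /findTopFiveSponsors without that prefix and without the charityId parameter. A sketch of the request with both included:

    // Sketch: use the full mapped path and pass the request parameter explicitly.
    mockMvc.perform(get("/api/findTopFiveSponsors")
                .param("charityId", "1")
                .contentType(MediaType.APPLICATION_JSON))
            .andDo(print())
            .andExpect(status().isOk());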

How to create a separate in-memory database for each test?

I'm writing tests with xUnit on .NET Core 3.0 and I have a problem with the in-memory database. I need a separate database for each test, but right now I create a single database, which causes problems, and I have no idea how to create a new database for each test.

    public class AccountAdminTest :
        IClassFixture<CustomWebApplicationFactory<Startup>>
        {
            private readonly HttpClient _client;
            private IServiceScopeFactory scopeFactory;
            private readonly CustomWebApplicationFactory<Startup> _factory;
            private ApplicationDbContext _context;

            public AccountAdminTest(CustomWebApplicationFactory<Startup> factory)
            {
                _factory = factory;
                _client = _factory.CreateClient(new WebApplicationFactoryClientOptions
                {
                    AllowAutoRedirect = true,
                    BaseAddress = new Uri("https://localhost:44444")
                });

                scopeFactory = _factory.Services.GetService<IServiceScopeFactory>();
                var scope = scopeFactory.CreateScope();
                _context = scope.ServiceProvider.GetService<ApplicationDbContext>();
            }
        }
    public class CustomWebApplicationFactory<TStartup>
     : WebApplicationFactory<TStartup> where TStartup : class
    {
        protected override void ConfigureWebHost(IWebHostBuilder builder)
        {
            builder.ConfigureTestServices(services =>
            {
                var descriptor = services.SingleOrDefault(
                    d => d.ServiceType ==
                        typeof(DbContextOptions<ApplicationDbContext>));

                if (descriptor != null)
                {
                    services.Remove(descriptor);
                }

                services.AddDbContext<ApplicationDbContext>((options, context) =>
                {
                    context.UseInMemoryDatabase("IdentityDatabase");
                });
            });
        }
    }

Compare two objects by properties in JUnit Tests

I have a method that removes, changes and adds items in a list of objects that I pass to it, and returns some of these objects if they have special characteristics. Now I want to write the test for that class, and it drives me crazy, since I cannot just compare the returned objects with a list of expected objects hardcoded into the test. I also do not want to override the equals and hashCode methods just so the tests will work. Is there any clean approach to doing that in a test? I'm quite new to Java, so I only have a basic understanding of JUnit and things like Mockito :/
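
As an illustration only, one way to compare by field values without touching equals/hashCode is an element-wise reflection comparison, reusing the ReflectionEquals helper from the Mockito question above (the helper class below and its name are hypothetical, and it assumes the objects have no nested collections):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.List;

import org.mockito.internal.matchers.apachecommons.ReflectionEquals;

public final class ListAssertions {

    private ListAssertions() {
    }

    // Sketch: compare two lists field by field via reflection.
    public static void assertListsMatchByFields(List<?> expected, List<?> actual) {
        assertEquals("list sizes differ", expected.size(), actual.size());
        for (int i = 0; i < expected.size(); i++) {
            assertTrue("element " + i + " differs",
                    new ReflectionEquals(expected.get(i)).matches(actual.get(i)));
        }
    }
}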

How do I avoid creating some beans in a Spring Boot test?

My application has a component that needs to read some data from another system when the application is ready, so when I run tests I want this component not to be created.

@Component
@Slf4j
public class DeviceStatisticsSyncHandler {
    @EventListener
    public void handle(ApplicationReadyEvent event) {
        syncDeviceStatisticsDataSync();
    }

    @Value("${test.mode:false}")
    public  boolean serviceEnabled;
}

I can use a condition to solve this, but then other readers of the code need to understand it, so I don't think this is a very good approach.

@EventListener(condition =  "@deviceStatisticsSyncHandler .isServiceEnabled()")
public void handle(ApplicationReadyEvent event) {
    syncDeviceStatisticsDataSync();
}

public  boolean isServiceEnabled() {
    return !serviceEnabled;
}

@Value("${test.mode:false}")
public  boolean serviceEnabled;

My application doesn't use profiles. Is there another way to solve this problem?

Spring Boot version: 2.1.3
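
One possible direction, sketched under the assumption that introducing a test.mode property is acceptable: guard the component itself with @ConditionalOnProperty so the bean is simply not registered when the property is switched on in tests.

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Sketch: the bean is only created when test.mode is false or absent.
@Component
@ConditionalOnProperty(name = "test.mode", havingValue = "false", matchIfMissing = true)
public class DeviceStatisticsSyncHandler {

    @EventListener
    public void handle(ApplicationReadyEvent event) {
        syncDeviceStatisticsDataSync();
    }
}

A test could then set the property, for example with @SpringBootTest(properties = "test.mode=true"), so the handler bean is never created and nothing tries to read from the other system.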

Spring Boot org.springframework.web.util.NestedServletException caused by missing resource

I am testing that, if there is no charity with the specified ID, the page shows the 404 page containing the string "No matching charity". I have done...

@Test
    public void shouldReturn404IfThereIsNoMatchingCharity() throws Exception {
        Charity nspcc = new Charity(1L,
                "12345678",
                "National Society for the Prevention of Cruelty to Children",
                "Kids charity",
                "nspcc",
                "NSPCC",
                true);

        given(charityService.findById(1L)).willReturn(Optional.of(nspcc));
        given(charityService.findById(2L)).willReturn(Optional.empty());

        this.mockMvc.perform(get("/charity/2"))
                .andDo(print())
                .andExpect(status().isNotFound())
                .andExpect(content()
                        .string(containsString("No matching charity")));
    }

But I get this error:

org.springframework.web.util.NestedServletException: Request processing failed; nested exception is com.nsa.charitystarter.controllers.exceptions.MissingResourceException: No matching charity

    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1013)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:634)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
    at org.springframework.test.web.servlet.TestDispatcherServlet.service(TestDispatcherServlet.java:71)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    at org.springframework.mock.web.MockFilterChain$ServletFilterProxy.doFilter(MockFilterChain.java:166)
    at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
    at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
    at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
    at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
    at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
    at org.springframework.test.web.servlet.MockMvc.perform(MockMvc.java:182)
    at com.nsa.charitystarter.controllers.CharityProfileTest.shouldReturn404IfThereIsNoMatchingCharity(CharityProfileTest.java:69)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.springframework.test.context.junit4.statements.RunBeforeTestExecutionCallbacks.evaluate(RunBeforeTestExecutionCallbacks.java:74)
    at org.springframework.test.context.junit4.statements.RunAfterTestExecutionCallbacks.evaluate(RunAfterTestExecutionCallbacks.java:84)
    at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:75)
    at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:86)
    at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
    at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: com.nsa.charitystarter.controllers.exceptions.MissingResourceException: No matching charity
    at com.nsa.charitystarter.controllers.CharityProfile.getCharityProfile(CharityProfile.java:38)

The missing resource exception is defined as follows...

public class MissingResourceException extends RuntimeException {
    protected String returnUrl;

    public MissingResourceException(String msg, String returnUrl) {
        super(msg);
        this.returnUrl = returnUrl;
    }

    public MissingResourceException(String msg) {
        this(msg, "/");
    }

    public MissingResourceException() {
        this("Missing Resource", "/");
    }
}

It gets used as follows...

@ControllerAdvice(annotations = RestController.class)
public class RestResponseEntityExceptionHandler extends ResponseEntityExceptionHandler {
    @ExceptionHandler(MissingResourceException.class)
    protected ResponseEntity<Object> handleMissingResource(RuntimeException ex,
                                                           WebRequest request) {
        String bodyOfResponse = ex.getMessage();
        return handleExceptionInternal(ex, bodyOfResponse,
                new HttpHeaders(), HttpStatus.NOT_FOUND, request);
    }

    @ExceptionHandler(NonUniqueResourceException.class)
    protected ResponseEntity<Object> handleNonUniqueResource(RuntimeException ex,
                                                             WebRequest request) {
        String bodyOfResponse = ex.getMessage();
        return handleExceptionInternal(ex, bodyOfResponse,
                new HttpHeaders(), HttpStatus.CONFLICT, request);
    }
}

I'm not sure why the error is occurring. Any advice on how this can be fixed?
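
One observation that may or may not be the cause: the advice above is restricted to @RestController classes, so if CharityProfile is a plain @Controller (it is not shown here), the MissingResourceException is never translated and bubbles up as a NestedServletException. A sketch of a companion advice without that restriction, with a hypothetical class name:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Sketch: handle the same exception for controllers that are not @RestController.
@ControllerAdvice
public class MissingResourceExceptionHandler {

    @ExceptionHandler(MissingResourceException.class)
    public ResponseEntity<String> handleMissingResource(MissingResourceException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }
}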

What's the best equation for a p-value calculation when testing item matching?

I've inherited a document management system where there seem to be some documents wrongly associated with a client. We need to know the scale of the problem and how many clients we'd need to test to be fairly certain of the scale of the issue.

I remember from university doing a bit on p-values in statistical testing to determine the likelihood of mistake in your estimates, but not enough to figure out how I would approach this.

Say I have 1000 clients, each with 30 documents on average, and I want to manually check documents to see whether they should have been allocated to that client. If I check 10, 50, 100 or 500, how would I estimate the p-value for each, and would that p-value give me the confidence to say something like '1 in 100 clients has a bad document, with an error estimate of 0.01'?

The ultimate question is one of risk: if the error estimate is too high, we'll have to manually check every document for all 1000 clients, but if it's very low, we can perhaps accept the risk that, of the 30,000 documents, say 30 are likely to be wrong.

I found a Wikipedia article, but I'm just not able to understand how I'd apply the numbers I've got.
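
For what it's worth, a sketch of the standard sample-size formula for estimating a proportion; this frames the task as a confidence interval around the bad-document rate rather than a p-value, which is an assumption about what is actually needed. With a tolerated margin of error e, a confidence-level z-score z (1.96 for 95%) and a worst-case assumed bad proportion p:

n = \frac{z^2 \, p(1-p)}{e^2}

For example, p = 0.5, z = 1.96 and e = 0.05 gives n \approx 385 documents to sample; assuming a smaller p or tolerating a larger e shrinks that number.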

List available test cases in Android and iOS

I'm developing test automation for both Android and iOS. I want to list all available test cases like this:

com.android.Class1#testA
com.android.Class1#testB
com.android.Class2#testA
...

I have already found this for Android:

adb shell am instrument -w -r -e log true -e package 'com.android' com.android.test/androidx.test.runner.AndroidJUnitRunner

But this command gives me only the test cases installed on the connected device. Is there any way to list all available test cases?

Wednesday, 30 October 2019

How to test In App Update In Android Manually

I have implemented in-app updates in my app and followed the steps below to test it:

  1. Uninstall the original app from device.

  2. Install the app from the Google Play Store

  3. Uninstall the app again.

  4. Generate a signed APK with the new feature, with a lower versionCode than the Google Play version, and install this APK.

The steps above do not work on some devices.

So I want to know if there is an alternative way to test this.

Imported the test cases from VIP Test Modeller but they are not importing under the test plan; they show under the Boards section

I imported the test cases from VIP Test Modeller, but they are not imported under the test plan; they show up under the Boards section.

Kindly let me know if I need to change any configuration in Azure.

Why does Jasmine ignore ajax-call?

So we've got this weird frontend at our company written with AngularJS, and I have the pleasure of writing automated tests for our new CI setup with Jasmine. However, at the moment I'm not even trying to test the frontend; I just want to get Jasmine to send out a simple AJAX call and print the result.

I've tried this: https://coderwall.com/p/pbsi7g/testing-ajax-calls-with-jasmine-2 and this: How do I verify jQuery AJAX events with Jasmine? this: Mocking jQuery ajax calls with Jasmine and many more.

So here's my current code:

it('test', function (done) {
                jasmine.Ajax.install();
                $.ajax({
                    url: "http://5da98016de10b40014f37142.mockapi.io/api/networks",
                    contentType: "application/json",
                    dataType: "json",
                    success: function(result){
                        console.log(result);
                        expect(result).toBeDefined();
                        done();
                    },
                    error: function(){
                        console.log("ERROR");
                        done();
                    }
                });

            });

I've tried putting the call in a separate function outside the it(), writing the result of the call to a variable and checking its value...

However, it's always the same problem: Jasmine straight up ignores the call and won't even enter the $.ajax() function.

I am pretty new to writing tests and had never used Jasmine before yesterday, so have mercy if I'm just totally forgetting something here.

Best method to mock an HTTP request without a URL in Django

I have two classes like the ones below. I don't have a URL path for class A; what is the best method to test process_get? I tried mocking an HTTP request with Request from rest_framework.request, but it doesn't let me set query_params, which I need to set.

class A(APIView):
    def process_get(self, request, form):
        pass

class B(A):
    def get(self, request):
        pass

Is there a way to export the result of a unit test in Visual Studio Code?

I have this test in adonis.js - node.js:

test('realiza login com o usuário registrado', async ({ client }) => {
  const response = await client
    .post('login')
    .send({
      username: 'usuarioteste',
      password: 'usuarioteste'
    })
    .end()

    response.assertStatus(200)

})

When I run adonis test I receive:

  Usuario
superagent: Enable experimental feature http2
    ✓ make a register (411ms)
    ✓ login (132ms)

   PASSED 

  total       : 2
  passed      : 2
  time        : 903ms

Is there a way to export this log to an external file?

How to test if handleClick has been called?

I have the following functional component:

import React, { useState } from 'react';
import logo from './logo.svg';
import './App.css';

export function App() {
  const [item, setItem] = useState(false);

  function handleChange(e) {
    console.log(e.target.value);
  }

  function handleClick() {
    setItem(!item);
  }

  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
        {item && <div id="saludo">hola</div>}
        <select
          name="select"
          id="selector"
          defaultValue="value2"
          onChange={handleChange}
        >
          <option value="value1">Value 1</option>
          <option value="value2">Value 2</option>
          <option value="value3">Value 3</option>
        </select>
        <button onClick={handleClick}>I'm a button</button>
      </header>
    </div>
  );
}

export default App;

I have the following test:

import React from 'react';
import { render, unmountComponentAtNode } from 'react-dom';
import { act, Simulate } from 'react-dom/test-utils';
import App from './App';

describe('test', () => {
  let container = null;
  beforeEach(() => {
    // setup a DOM element as a render target
    container = document.createElement('div');
    document.body.appendChild(container);
  });

  afterEach(() => {
    // cleanup on exiting
    unmountComponentAtNode(container);
    container.remove();
    container = null;
  });
  it('renders without crashing', () => {
    const div = document.createElement('div');
    render(<App />, div);
    unmountComponentAtNode(div);
  });


  it('various test', () => {
    act(() => {
      render(<App />, container);
    });

    const button = container.querySelector('button');
    act(() => {
      Simulate.click(button);
    });

    let element = document.getElementById('saludo');
    expect(element.innerHTML).toBe('hola');

    act(() => {
      Simulate.click(button);
    });

    element = document.getElementById('saludo');
    expect(element).toBeNull();
    let selector = document.getElementById('selector');
    expect(selector.value).toBe('value2');

    selector.value = 'value3';
    Simulate.change(selector);

    expect(selector.value).toBe('value3');
  });
});

But now I want to test the internal functions and the state, and I can't find a way to test whether handleChange or handleClick has been called.

Testing select onChange with React test utils

I have the following component:

import React from 'react';
import { useHistory } from 'react-router-dom';
import Logo from '../../Commons/Components/Logo';

const ComponentList = () => {
  let history = useHistory();

  const handleSelectChange = e => {
    history.push(`/component-list/${e.target.value}`);
  };

  return (
    <div className="container">
      <div className="row">
        <div className="col-md">
          <section className="app">
            <Logo />
          </section>
        </div>
      </div>
      <div className="row">
        <div className="col-md"></div>
        <div className="col-md">
          <section className="app">
            <select
              id="components-select"
              name="select"
              onBlur={handleSelectChange}
              onChange={handleSelectChange}
            >
              <option value="">-- Componentes --</option>
              <option value="ui-textarea">ui-textarea</option>
              <option value="ui-collapse">ui-collapse</option>
              <option value="ui-checkbox">ui-checkbox</option>
              <option value="ui-input">ui-input</option>
              <option value="ui-select">ui-select</option>
              <option value="ui-button">ui-button</option>
              <option value="ui-input-date-picker">ui-input-date-picker</option>
              <option value="ui-input-iban">ui-input-iban</option>
              <option value="ui-tooltip">ui-tooltip</option>
              <option value="ui-carousel">ui-carousel</option>
              <option value="ui-document-viewer">ui-document-viewer</option>
              <option value="ui-loader">ui-loader</option>
              <option value="ui-card">ui-card</option>
              <option value="ui-charts">ui-charts</option>
              <option value="ui-data-grid">ui-data-grid</option>
              <option value="ui-list">ui-list</option>
              <option value="ui-modal">ui-modal</option>
              <option value="ui-pagination">ui-pagination</option>
              <option value="ui-wizard">ui-wizard</option>
              <option value="ui-icon">ui-icon</option>
            </select>
          </section>
        </div>
        <div className="col-md"></div>
      </div>
    </div>
  );
};

export default ComponentList;

And I am trying to test this way:

it('should call history.push on change component list selected value', async () => {
    const history = createMemoryHistory();
    history.push = jest.fn();

    act(() => {
      render(
        <MemoryRouter>
          <ComponentList />
        </MemoryRouter>,
        container
      );
    });

    const element = document.getElementById('components-select');

    act(async () => {
      await Simulate.select(element, { target: { value: 'ui-textarea' } });
    });


    expect(element.value).toBe('ui-textarea');
    expect(element.options[1].selected).toBeTruthy();
    expect(history.push).toHaveBeenCalled();
  });

but I can't make it work. I also tried:

act(async () => {
      await Simulate.change(element, { target: { value: 'ui-textarea' } });
    });
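
For reference, a minimal sketch of how this kind of assertion is often set up, assuming react-router-dom v5 (whose Router component accepts a history prop) and the same container/beforeEach setup as the tests above; the import path for ComponentList and the exact path asserted at the end are assumptions based on handleSelectChange:

import React from 'react';
import { render } from 'react-dom';
import { act, Simulate } from 'react-dom/test-utils';
import { Router } from 'react-router-dom';
import { createMemoryHistory } from 'history';
import ComponentList from './ComponentList'; // path is an assumption

it('should call history.push on change component list selected value', () => {
  // use a real memory history so useHistory() inside ComponentList receives it
  const history = createMemoryHistory();
  jest.spyOn(history, 'push');

  act(() => {
    render(
      <Router history={history}>
        <ComponentList />
      </Router>,
      container // same container created in beforeEach, as in the tests above
    );
  });

  const select = document.getElementById('components-select');
  // set the value first, then fire the change event (same pattern as the earlier test)
  select.value = 'ui-textarea';
  act(() => {
    Simulate.change(select);
  });

  expect(history.push).toHaveBeenCalledWith('/component-list/ui-textarea');
});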

How to test functional component's local function with React test utils?

I have the following functional component:

import React, { useState } from 'react';
import logo from './logo.svg';
import './App.css';

export function App() {
  const [item, setItem] = useState(false);

  function handleChange(e) {
    console.log(e.target.value);
  }

  function handleClick() {
    setItem(!item);
  }

  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
        {item && <div id="saludo">hola</div>}
        <select
          name="select"
          id="selector"
          defaultValue="value2"
          onChange={handleChange}
        >
          <option value="value1">Value 1</option>
          <option value="value2">Value 2</option>
          <option value="value3">Value 3</option>
        </select>
        <button onClick={handleClick}>I'm a button</button>
      </header>
    </div>
  );
}

export default App;

I have the following test:

import React from 'react';
import { render, unmountComponentAtNode } from 'react-dom';
import { act, Simulate } from 'react-dom/test-utils';
import App from './App';

describe('test', () => {
  let container = null;
  beforeEach(() => {
    // setup a DOM element as a render target
    container = document.createElement('div');
    document.body.appendChild(container);
  });

  afterEach(() => {
    // cleanup on exiting
    unmountComponentAtNode(container);
    container.remove();
    container = null;
  });
  it('renders without crashing', () => {
    const div = document.createElement('div');
    render(<App />, div);
    unmountComponentAtNode(div);
  });

  it('should click the button', () => {
    act(() => {
      render(<App />, container);
    });

    act(() => {
      container
        .querySelector('button')
        .dispatchEvent(new MouseEvent('click', { bubbles: true }));
    });

    let element = document.getElementById('saludo');
    expect(element.innerHTML).toBe('hola');

    act(() => {
      container
        .querySelector('button')
        .dispatchEvent(new MouseEvent('click', { bubbles: true }));
    });

    element = document.getElementById('saludo');
    expect(element).toBeNull();

    act(() => {
      container
        .querySelector('button')
        .dispatchEvent(new MouseEvent('click', { bubbles: true }));
    });

    element = document.getElementById('saludo');
    expect(element.innerHTML).toBe('hola');
  });

  it('various test', () => {
    act(() => {
      render(<App />, container);
    });

    const button = container.querySelector('button');
    act(() => {
      Simulate.click(button);
    });

    let element = document.getElementById('saludo');
    expect(element.innerHTML).toBe('hola');

    act(() => {
      Simulate.click(button);
    });

    element = document.getElementById('saludo');
    expect(element).toBeNull();
    let selector = document.getElementById('selector');
    expect(selector.value).toBe('value2');

    selector.value = 'value3';
    Simulate.change(selector);

    expect(selector.value).toBe('value3');
  });
});

But now I want to test the internal functions and the state, and I can't find a way to assert that handleChange or handleClick has been called.
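
Since handleChange and handleClick are plain closures inside the function component, they cannot be spied on directly from outside; a common workaround is to assert their observable side effects instead. A minimal sketch, reusing the render/act/Simulate/container setup from the tests above (the console.log assertion relies on handleChange logging e.target.value, as in the component code):

it('exercises handleChange and handleClick through their side effects', () => {
  // handleChange only logs, so spy on console.log to observe it
  const logSpy = jest.spyOn(console, 'log').mockImplementation(() => {});

  act(() => {
    render(<App />, container);
  });

  const selector = document.getElementById('selector');
  selector.value = 'value3';
  act(() => {
    Simulate.change(selector);
  });
  expect(logSpy).toHaveBeenCalledWith('value3');

  // handleClick toggles state, which is visible through the #saludo element
  act(() => {
    Simulate.click(container.querySelector('button'));
  });
  expect(document.getElementById('saludo')).not.toBeNull();

  logSpy.mockRestore();
});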

adonis test returning message: "superagent: Enable experimental feature http2"

My tests pass normally, but I'm receiving the message: “superagent: Enable experimental feature http2”.

Is there a way to hide this message?

My test:

test('realiza um registro de usuário', async ({ client }) => {

      const response = await client
        .post('users')
        .header('Authorization', 'bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOjMsImlhdCI6MTU3MjQ0MTA2M30.qN4OMAznOj4blEECSUs-miIMn8DLiAvonrYKxysH7y0')
        .send({
          username: 'usuarioteste',
          permission: 'Administrador',
          status: false,
          user_id: null,
          password: 'usuarioteste'
        })
        .end()

      response.assertStatus(200)
      response.assertJSONSubset({
        username: 'usuarioteste',
        permission: 'Administrador',
        status: false
      })
    })

When I run adonis test I receive:

superagent: Enable experimental feature http2
    ✓ realiza um registro de usuário (447ms)

   PASSED

Is there a way to hide this superagent: Enable experimental feature http2 warning? Can someone explain to me why this is happening?

Cannot add custom matchers for Jasmine using typescript and angular

I'm trying to add multiple new custom matchers to Jasmine and use them. The goal is to create a file that registers two matchers, and both matchers must be available.

So I've got a file named custom-matchers.ts:

/// <reference path="custom-matchers.d.ts"/>

(() => {
  beforeEach(async () => {
    jasmine.addMatchers({
      toHaveText: () => {
        return {
          compare: (actual, expected) => {
            return {
              pass: actual.getText().then(pass => pass === expected)
            };
          },
        };
      },
      toBePresent: () => {
        return {
          compare: (actual, expected) => {
            return {
              pass: actual.isPresent().then(pass => pass === expected),
            };
          },
        };
      }
    });
  });
})();

A type definition file named custom-matchers.d.ts (I've tried a name different from the referencing file, but the result is the same):

declare namespace jasmine {
  interface Matchers<T> {
    toHaveText(text: any): boolean;
    toBePresent(text: any): boolean;
  }
}

This file is registered in another file named e2e-helper.ts, which is imported in every test file:

/// <reference path="custom-matchers.d.ts"/>
import './custom-matchers';

When I try to use it in a test file, for example:

expect(object as any).toBePresent();

I get an error:

Property 'toBePresent' does not exist on type 'Matchers'

Testing a server-launching CLI application - Python, Flask

Question: How do I test a blocking CLI application in Python?

What I am trying to accomplish

I have a manage.py file, which is used to start the Flask development web server (and to do other admin tasks that are out of the scope of this question). Typically I would run this as python manage.py runserver, but to keep this question simple and relevant I have edited that file (see Minimal, Reproducible Example) so that it just runs the server. So the command below works:

$ python manage.py 
 * Serving Flask app "app" (lazy loading)
 * Environment: production...

The issue comes with testing the functionality of this CLI application. Click has CliRunner.invoke(), which allows one to test CLI applications. But in this particular case it blocks, because the invoked command starts the server and never returns.

from manage import runserver
from click.testing import CliRunner
import requests
import unittest


class TestManage(unittest.TestCase):

    def test_runserver(self):
        runner = CliRunner()
        runner.invoke(runserver)
        # It blocks above, does not go below!
        r = requests.get('http://127.0.0.1')
        assert r.status_code == 200


if __name__ == '__main__':
    unittest.main()

Attempts at solution

After trying many approaches I have found a "solution", which is to spawn a Process to start the server, run the tests, and then terminate the process on exit (this is included in the example below). This feels more like a hack than a "real" solution.

Minimal, Reproducible Example

  • Folder structure

    flask_app/
       - app.py
       - manage.py
       - test_manage.py
    
  • app.py

    from flask import Flask
    
    app = Flask(__name__)
    
    
    @app.route('/')
    def hello():
        return 'Hello, World!'
    
  • manage.py

    import click
    from app import app
    
    
    @click.command()
    def runserver():
        app.run()
    
    
    if __name__ == "__main__":
        runserver()
    
  • test_manage.py

    from manage import runserver
    from multiprocessing import Process
    from click.testing import CliRunner
    import requests
    import unittest
    
    
    class TestManage(unittest.TestCase):
    
        def setUp(self):
            self.runner = CliRunner()
            self.server = Process(
                target=self.runner.invoke,
                args=(runserver, )
            )
            self.server.start()
    
        def test_runserver(self):
            r = requests.get('http://127.0.0.1')
            assert r.status_code == 200
    
        def tearDown(self):
            self.server.terminate()
    
    
    if __name__ == '__main__':
        unittest.main()
    

    And run the test using $ python test_manage.py.

Django Python Testing: How do I assert that some exception happened and was handled

I'm trying to test a Django view using the client post method.

Inside the view I catch an exception and return a 500 response.

My test that checks the 500 response passes; however, I would now also like to test which exception class was actually raised during the view call.

Is there a way to do that?

I've tried self.assertRaises(SpecificException) as a context manager, but since the exception is already caught and handled inside the view, it doesn't do what I'm looking for:

with self.assertRaises(SpecificException):
    response = self.client.post(
        path=reverse("stock:getstockinvt"),
        data={'product_id': 1},
        content_type='application/json'
    )

If this can't be done then so be it; I am just wondering whether it is possible.

The alternative is to unit test the function that raises the exception.

gstreamer hlssink is not creating an m3u file or multiple ts files - instead one gigantic ts file

When running GStreamer with a UDP receiver pipeline, the hlssink creates one gigantic file instead of multiple files.

I want to stream the audio test source over the network, then at the receiving end save multiple files plus a playlist file using hlssink.

sender

gst-launch-1.0 audiotestsrc wave=8  ! rawaudioparse use-sink-caps=false format=pcm pcm-format=u8 sample-rate=8000 num-channels=1 ! rtpL8pay ! udpsink host=127.0.0.1 port=16504

receiver

gst-launch-1.0 udpsrc port=16504 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! hlssink max-files=5

I am looking for suggestions on which report-writing software can create a report pulling from a CSV file as the source

Which report-writing software would you suggest that can create a report by pulling from a CSV file as the source? Calculations such as totals would have to be made. It should be simple to use, not complex, and ideally offer the ability to automate the report. Any suggestions?

Executing ArchUnit tests from a java library

I have a bunch of Java Dropwizard microservices that have a similar structure. My objective is to write a set of ArchUnit test cases that must run in each service, and the build should fail if these test cases fail.

Since the checks will be similar, is it possible to extract all the test cases into a common library and add it as a dependency in every service? How can test cases from a library be run as part of the build of a service?

Using Jest within webpack 4 project throws 'Jest encountered an unexpected token' error

I am trying to include Jest in my current React app project to do some testing.

This worked great for a simple JavaScript file, but when I try to reference a file which uses import, it throws the error 'Jest encountered an unexpected token'.


I've been looking through the following guide and found I didn't have a .babelrc file:

https://jestjs.io/docs/en/webpack

So I added one with the following, as per the example, but this has had no effect when I run npm run test:

// .babelrc
{
  "presets": [["env", {"modules": false}]],

  "env": {
    "test": {
      "plugins": ["transform-es2015-modules-commonjs"]
    }
  }
}

In my current package.json I have added the following, as per the instructions on a different page:

 "scripts": {
    "test": "jest"
  },

My webpack.config.js file looks like this:

const path = require('path');
const webpack = require('webpack');
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const bundleOutputDir = './wwwroot/dist';

const javascriptExclude = [];

module.exports = (env) => {
    const isDevBuild = !(env && env.prod);

    const javascriptExclude = [
        /(node_modules|bower_components)/,
        path.resolve('./ClientApp/scripts/thirdParty')
    ];

    // if dev build then exclude our js files from being transpiled to help speed up build time
    if (isDevBuild) {
        javascriptExclude.push(path.resolve('./ClientApp/scripts'));
    }

    return [
        {
            stats: { modules: false },
            entry: { main: "./ClientApp/index.js" },
            resolve: { extensions: [".js", ".jsx"] },
            output: {
                path: path.join(__dirname, bundleOutputDir),
                filename: "[name].js",
                publicPath: "dist/"
            },
            module: {
                rules: [
                    {
                        test: /\.js$/,
                        exclude: javascriptExclude,
                        use: {
                            loader: "babel-loader",
                            options: {
                                presets: ["babel-preset-env", "react", 'stage-2'],
                                plugins: [
                                    "react-hot-loader/babel",
                                    "transform-class-properties"
                                ],
                                compact: false
                            }
                        }
                    },
                    {
                        test: /\.css$/,
                        use: isDevBuild
                            ? ["style-loader", "css-loader"]
                            : ExtractTextPlugin.extract({ use: "css-loader?minimize" })
                    },
                    {
                        test: /\.(jpe|gif|png|jpg|woff|woff2|eot|ttf|svg)(\?.*$|$)/, use: [
                            {
                                loader: 'url-loader',
                                options: {
                                    limit: 25000,
                                    publicPath: '/dist/'
                                },
                            },
                        ]
                    }
                ]
            },
            plugins: [
                new webpack.DllReferencePlugin({
                    context: __dirname,
                    manifest: require("./wwwroot/dist/vendor-manifest.json")
                })
            ].concat(
                isDevBuild
                    ? [
                        // Plugins that apply in development builds only
                        new webpack.SourceMapDevToolPlugin({
                            filename: "[file].map", // Remove this line if you prefer inline source maps
                            moduleFilenameTemplate: path.relative(
                                bundleOutputDir,
                                "[resourcePath]"
                            ) // Point sourcemap entries to the original file locations on disk
                        })
                    ]
                    : [
                        // Plugins that apply in production builds only
                        new webpack.optimize.UglifyJsPlugin(),
                        new ExtractTextPlugin("style.css"),
                        new webpack.DefinePlugin({ 'process.env': { 'NODE_ENV': JSON.stringify('production') } })
                    ]
            )
        }
    ];
};

Can anyone help?

Is it possible to convert a .robot file to .java in source code or vice versa?

I'm testing web application cases in RIDE and in Eclipse via the RED robot editor. I recently added the selenium-java library to a Java project, and I would like to convert the created class into a .robot or .txt file to run it with laugh, or vice versa.

Laravel test lifecycle hook

I have a trait that records activity on the created lifecycle hook. In my test class:

  • If I run any given single test with RefreshDatabase, it fails saying the table is empty.

  • If I run the test without RefreshDatabase, it passes, but then fails the next time for obvious reasons.

  • If I run the entire test class with RefreshDatabase, only the first test fails and the rest pass.

  • If I run the entire test class without RefreshDatabase, the first test passes and the rest fail for obvious reasons (duplicate key errors).

Here is one of the tests behaving this way:

    /** @test */
    public function user_can_access_their_own_activity()
    {
        $this->jsonAs($this->user, 'POST', route('team.store'), [
            'display_name' => 'Test Team',
        ])->assertStatus(200);

        $this->assertDatabaseHas('activities', [
            'user_id' => $this->user->getKey(),
            'type' => 'created_team',
        ]);

        $response = $this->jsonAs($this->user, 'GET', route('activity.index'))
            ->assertStatus(200);

        $response->assertJsonFragment([
            'type' => 'created_team',
            'uuid' => $this->user->uuid,
        ]);
    }

If I need to share any more info please let me know. Thanks!

Angular Testing: How to provide Injector?

I am trying to build an Angular component which depends on an Injector providing an InjectionToken:

import { Component, Injector, InjectionToken } from '@angular/core';

const TOKEN = new InjectionToken<string>('token');

@Component({
  selector: 'app-test',
  templateUrl: './test.component.html',
  styleUrls: ['./test.component.scss']
})
export class TestComponent {
  constructor(injector: Injector) {
    injector.get(TOKEN);
  }
}

However, I cannot get the test running for this component, and I get the following error:

NullInjectorError: StaticInjectorError(DynamicTestModule)[InjectionToken token]: 
  StaticInjectorError(Platform: core)[InjectionToken token]: 
    NullInjectorError: No provider for InjectionToken token!

The following approach to provide the Injector in the TestBed didn't work:

TestBed.configureTestingModule({
  declarations: [ TestComponent ],
  providers: [
    { provide: Injector, useValue: Injector.create({providers: [
      {
        provide: new InjectionToken<string>('token'),
        useValue: '',
        deps: []
      }
    ]}) }
  ]
})

How can I set up the test environment correctly?
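
One detail worth noting: InjectionToken instances are matched by identity, so the new InjectionToken<string>('token') created inside the TestBed providers above is a different token from the TOKEN constant the component asks for. A minimal sketch of the usual setup, assuming TOKEN is exported from the component file so the test can provide the very same instance (replacing the Injector itself is then unnecessary, because the component's injector falls back to the testing module):

import { TestBed } from '@angular/core/testing';
// assumes TOKEN is exported from the component file
import { TestComponent, TOKEN } from './test.component';

TestBed.configureTestingModule({
  declarations: [ TestComponent ],
  providers: [
    // providing the same token instance here is enough;
    // injector.get(TOKEN) in the constructor will resolve it
    { provide: TOKEN, useValue: 'some test value' }
  ]
});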

What is 'ng' in cypress

describe('View: Phone list', function() {
  beforeEach(function() {
    cy.visit('/index.html#!/phones')
    cy.ng('model', '$ctrl.query').as('q')
  })
})

Please can someone tell me what this 'ng' means in this code? I tried to find out in the Mocha and Cypress docs, but found nothing.

How to find the most popular used devices for an app on Google Play?

In our team, we would like to buy the most-used devices for our app published on Google Play, for testing purposes.

Any idea how to get this information?

Capybara test makes Rails throw DoubleRender error in Test environment, never occurs in Development

On a website, I have a modal in which a row from a table can be selected and a Submit button is then pressed. The button action makes an Ajax POST to a server controller to create a new data structure. The controller then returns a JSON string containing the path to which the JavaScript will redirect the user.

When I manually test this in the Development environment, it works fine and has never thrown any errors at all. However, when I run a Capybara test that simulates the same behaviour, the controller throws the error: "Render and/or redirect were called multiple times in this action. Please note that you may only call render OR redirect, and at most once per action. Also note that neither redirect nor render terminate execution of the action, so if you want to exit an action after redirecting, you need to do something like "redirect_to(...) and return"." What's weird is that it is completely unpredictable: sometimes the error is thrown, and sometimes it isn't and the test passes. I cannot find any pattern in the behaviour; it seems completely random. It also occurs when I insert a pause in the test and try to do it manually - again, the error appears randomly. The error is caused by the last line in my controller function add_subset, which calls render json:.

My controller code:

def add_subset
   authorize Thesaurus, :edit?
   results = Thesaurus.history_uris(identifier: the_params[:identifier], scope: IsoNamespace.find(the_params[:scope_id]))
   thesaurus = Thesaurus.find_minimum(results.first)
   thesaurus = edit_item(thesaurus)
   new_mc = thesaurus.add_subset(the_params[:concept_id])
   path = edit_subset_thesauri_managed_concept_path(new_mc, source_mc: new_mc.subsets_links.to_id, context_id: params[:ctxt_id])
   render json: { redirect_path: path }, :status => 200
end

Ajax POST

$.ajax({
    url: "/thesauri/"+data["scope_id"]+"/add_subset",
    type: "POST",
    data: JSON.stringify({thesauri: data, ctxt_id: context_id}),
    dataType: 'json',
    contentType: 'application/json',
    error: function (xhr, status, error) {
      displayError("An error has occurred.");
    },
    success: function(result) {
      window.location = result["redirect_path"];
    }
});

Test

   [...]    
   find(:xpath, "//tr[contains(.,'C85494')]/td/a", :text => 'Show').click
   wait_for_ajax
   expect(page).to have_content("PKUNIT")
   click_link "Subsets"
   expect(page).to have_content("No subsets found.")
   click_button "+ New subset"
   expect(page).to have_content("Select Terminology")
   find(:xpath, "//*[@id='thTable']/tbody/tr[1]/td[1]").click
   click_button "Select"
   wait_for_ajax
   expect(page).to have_content("Edit Subset")

Does anyone have any idea what could possibly cause this?

end to end or integration or system testing

What kind of testing is this? Is the following considered end-to-end testing, or is it integration testing or system testing? If none of those, could you elaborate on the types of testing in the context of the code example?

I'm basically calling the endpoint on localhost and asserting on the status/output.

let assert = require("assert");
let http = require("http");

describe("EndToEndTesting", function() {

    describe("GET /users", function() {
        it("should return list of users", function(done) {
            http.request({
                method: "GET",
                hostname: "localhost",
                port: 3000,
                path: "/users"
            }, function(response){
                let buffers = [];
                response.on("data", buffers.push.bind(buffers));
                response.on("end", function(){
                    let body = JSON.parse(Buffer.concat(buffers).toString());
                    assert(response.statusCode == 200);
                    done(); // signal Mocha that the async assertions have completed
                });
            }).on("error", console.error).end();
        });
    });
});

Tuesday 29 October 2019

Cypress Cucumber Step running multiple steps

Let's say I have:

Step1

Step2

Step3

Is it possible to have a Step4 which runs all three of them?
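
One pattern that is often used with cypress-cucumber-preprocessor (a sketch; the helper names below are hypothetical) is to keep the step bodies in plain functions and let a composite step call them in order:

// step_definitions/steps.js
const { Given } = require('cypress-cucumber-preprocessor/steps');

// hypothetical helpers holding the real step bodies
const doStep1 = () => { /* body of Step1 */ };
const doStep2 = () => { /* body of Step2 */ };
const doStep3 = () => { /* body of Step3 */ };

Given('Step1', doStep1);
Given('Step2', doStep2);
Given('Step3', doStep3);

// Step4 simply reuses the same helpers
Given('Step4', () => {
  doStep1();
  doStep2();
  doStep3();
});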

How to write snapshot test for SurveyScreen?

I am trying to write a snapshot test for a screen called SurveyScreen, which uses the react-native-simple-survey library.

In the render function of SurveyScreen.js, it looks like:

render() {
        return (
            <View style={styles.background}>
                <View style=>
                    <ScrollView contentContainerStyle= keyboardShouldPersistTaps='handled'>
                        <SimpleSurvey
                            survey={SurveyLocations[this.props.navigation.getParam('survey')]}
                            renderSelector={this.renderButton.bind(this)}
                            // containerStyle={styles.surveyContainer}
                            selectionGroupContainerStyle={styles.surveySelectionGroupContainer}
                            navButtonContainerStyle={styles.surveyNavButtonContainer}
                            renderPrevious={this.renderPreviousButton.bind(this)}
                            renderNext={this.renderNextButton.bind(this)}
                            renderFinished={this.renderFinishedButton.bind(this)}
                            renderQuestionText={this.renderQuestionText}
                            onSurveyFinished={(answers) => this.onSurveyFinished(answers)}
                            onAnswerSubmitted={(answer) => this.answerSubmit(answer)}
                            renderTextInput={this.renderTextBox}
                            renderDateInput={this.renderDateBox}
                            renderTimeInput={this.renderTimeBox}
                            renderNumericInput={this.renderNumericInput}
                            renderInfo={this.renderInfoText}
                        />
                    </ScrollView>
                </View>

            </View>
        );
    }

And this is the test file for it

import React from "react";
import SurveyScreen from "../app/screens/SurveyScreen"
import renderer from "react-test-renderer"

describe("<SurveyScreen/>", ()=>{
  it("SurveyScreen snapshot", ()=>{
    let wrapper = null;
    const spyNavigate = jest.fn();
    const props = {
      navaigation: {
        navigate: spyNavigate,
        state:{}
      }
    }
    const params = {
      token: 'someToken'
    }

    beforeEach(() => {
      wrapper = renderer.create(<SurveyScreen {...props} />)
      wrapper.setState({params: params})
    })
    expect(wrapper).toMatchSnapshot();
  });
});

In the auto-created snap file, there is nothing... What should I do to make it work? Thank you!

Adding third-party cookie exception to Cypress configuration

Using Cypress, the JavaScript end-to-end testing framework, is there a supported mechanism to add a third-party cookie exception, or some other Chromium preference (or an Electron one, if not using e.g. --browser chromium)?

I have found a mechanism to do this but it involves wrapping the browser executable, effectively a man-in-the-middle between Cypress and the "browser" binary.
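
Aside from wrapping the executable, the supported hook for influencing how the browser is launched is the before:browser:launch plugins event, which lets you append Chromium command-line switches. A sketch for Cypress 3.x, where the handler receives an args array (newer versions pass a launchOptions object instead); note that --example-cookie-switch below is a placeholder rather than a real Chromium flag, since third-party-cookie exceptions are normally a profile preference rather than a command-line switch:

// cypress/plugins/index.js
module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    if (browser.name === 'chrome' || browser.name === 'chromium') {
      // --example-cookie-switch is a placeholder; substitute whatever
      // switch or preference mechanism turns out to be appropriate
      args.push('--example-cookie-switch');
    }
    return args;
  });
};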

How to implement unit testing for a CLI that interacts with sqlite

I am building a command-line tool that uses SQLite with Go. This is the first time I have tried to do unit testing, so I am having a hard time working out an appropriate structure for my code.

For example, the CLI will open an SQLite database and check whether it has the tables needed; this part of the functionality is handled by the cmd package. If the tables are missing, it will bootstrap them. Say I want to test that it correctly detects the absence of those tables and correctly bootstraps the database.

Some problems I have:

  1. Should I create _test files as a separate package or as part of the cmd package?

  2. I am trying to generate some files and compare them with some preexisting files. Where should I be generating those files (some temp folder?) ? Am I responsible for removing those files before the end of those tests?

  3. The cmd package has an unexported variable db that points to the database it should use in production. So the package defines db, err := sql.Open(...). Now during testing I want to use another database. What I am doing now is to set db to point to something else at the start of the test function. This feels sloppy somehow. Is it the right way to do it?

  4. Combination of 1 and 3: if I am to do testing in a separate package, how am I supposed to make it use a different database, given that db is unexported?

I feel some of these might be silly questions, but anyway, thank you for any help!

How do I test both sides of an `if @generated`?

Suppose I have some generated functions I want to test, as below. Following the recommendation of the manual, I made them optionally generated.

# This code is taken directly from Base. 
# https://github.com/JuliaLang/julia/blob/592748adb25301a45bd6edef3ac0a93eed069852/base/namedtuple.jl#L220-L231
# So importing some of the helpers
using Base: merge_names, merge_types, sym_in

function my_merge_fail1(a::NamedTuple{an}, b::NamedTuple{bn}) where {an, bn}
    if @generated
        names = merge_names(an, bn)
        types = merge_types(names, a, b)
        vals = Any[ :(getfield($(sym_in(n, bn) ? :b : :a), $(QuoteNode(n)))) for n in names ]
        :(error("typo1"); NamedTuple{$names,$types}(($(vals...),)) )
    else
        names = merge_names(an, bn)
        types = merge_types(names, typeof(a), typeof(b))
        NamedTuple{names,types}(map(n->getfield(sym_in(n, bn) ? b : a, n), names))
    end
end

function my_merge_fail2(a::NamedTuple{an}, b::NamedTuple{bn}) where {an, bn}
    if @generated
        names = merge_names(an, bn)
        types = merge_types(names, a, b)
        vals = Any[ :(getfield($(sym_in(n, bn) ? :b : :a), $(QuoteNode(n)))) for n in names ]
        :(NamedTuple{$names,$types}(($(vals...),)) )
    else
        error("typo2")
        names = merge_names(an, bn)
        types = merge_types(names, typeof(a), typeof(b))
        NamedTuple{names,types}(map(n->getfield(sym_in(n, bn) ? b : a, n), names))
    end
end

Now, it turns out I made a "typo" in each: I mistakenly included error(...) in both of them. In my_merge_fail1 I made the mistake in the @generated branch, and in my_merge_fail2 in the non-generated branch.

My tests seem to only catch it in one branch:

julia> using Test

julia> @test my_merge_fail1((a=1,), (b=2,)) == (a=1, b=2)
Error During Test at REPL[12]:1
  Test threw exception
  Expression: my_merge_fail1((a = 1,), (b = 2,)) == (a = 1, b = 2)
  typo1
  Stacktrace:
   [1] error(::String) at ./error.jl:33
   [2] macro expansion at ./REPL[6]:2 [inlined]
   [3] my_merge_fail1(::NamedTuple{(:a,),Tuple{Int64}}, ::NamedTuple{(:b,),Tuple{Int64}}) at ./REPL[6]:2
   [4] top-level scope at REPL[12]:1
   [5] eval(::Module, ::Any) at ./boot.jl:331
   [6] eval_user_input(::Any, ::REPL.REPLBackend) at /usr/local/src/julia/julia-master/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
   [7] macro expansion at /usr/local/src/julia/julia-master/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:118 [inlined]
   [8] (::REPL.var"#26#27"{REPL.REPLBackend})() at ./task.jl:333

ERROR: There was an error during testing

julia> @test my_merge_fail2((a=1,), (b=2,)) == (a=1, b=2)
Test Passed

How can I improve my tests?

Micronaut @Property value that is a list

How can I define some specific properties for testing whose value is a list, not just a string?

The docs explain how to do this with a string, but I can't set the value to a list...

application.yml

items:
  - "Item 1"
  - "Item 2"

Test file:

@Test
@Property(name = "items", value = "Item 1")
fun justWithOneItem() {
    // ...
}

In the actual code, this works (as documented here):

Project file:

@Singleton
class SomeClass {
    @set:Inject
    @setparam:Property(name = "items")
    var items: List<String>? = null

    // ...
}

Thank you!

Is there a tool which will make sure that I won't import TypeScript modules against the rules of clean architecture?

In clean architecture, dependencies between layers are clearly defined:


Code in inner circles should not import from outside circles. Is there a tool for JS or TS to check if there are some imports from "illegal" directories?

So, for example, to make sure that in domain directories there are no imports from infrastructure, but to allow importing from domain in the infrastructure module...
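
One option that needs no extra tooling is ESLint's built-in no-restricted-imports rule, scoped per layer with overrides; a sketch (the directory globs and layer names are assumptions, adjust them to the actual project layout):

// .eslintrc.js - a sketch; adjust the globs to the project structure
module.exports = {
  overrides: [
    {
      // domain code must not reach outward
      files: ['src/domain/**/*.ts'],
      rules: {
        'no-restricted-imports': [
          'error',
          { patterns: ['**/infrastructure/**', '**/application/**'] }
        ]
      }
    },
    {
      // application code may use domain, but not infrastructure
      files: ['src/application/**/*.ts'],
      rules: {
        'no-restricted-imports': ['error', { patterns: ['**/infrastructure/**'] }]
      }
    }
    // the outermost layer (infrastructure) is free to import from inner layers
  ]
};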

VSCode Debugging - Opening Files on launch

I am new to VSCode and I implemented a vscode extension. I started to write tests using runTests from vscode-tests.

To Debug I wrote this configuration:

"name": "Test Syntax Coloring",
"type": "extensionHost",
"request": "launch",
"runtimeExecutable": "${execPath}",
"args": [
    "--extensionDevelopmentPath=${workspaceFolder}",
    "--extensionTestsPath=${workspaceFolder}/out/tests/suite/index"
],
"outFiles": [
    "${workspaceFolder}/out/tests/**/*.js"

I searched for an ability to open a file on launch, but I couldn't find a way to do so. Is it possible? How?

Mobile Testing -AndroidMobileCapabilityType.App_Packages

Using the Eclipse IDE, while using "AndroidMobileCapabilityType.App_Packages" I see that Eclipse does not recognise it. It looks like some JAR files are missing; please check and confirm whether I need to add any JAR files to the build path, and if so, where do I copy them? Sorry, it is a basic question and I'm still a learner.

Screenshot for reference.

Regards, Sanjay

Jest - Snapshot test fail

I am receiving the error below after running Jest. I don't have much experience using Jest, so I'm hoping someone can provide information on why this test is failing.

Results of the Jest test:

- Snapshot
+ Received
- <span
-   className="icon icon-dismiss size-2x "
-   style={
-     Object {
-       "color": "inherit",
+ <Fragment>
+   <span
+     className="icon icon-dismiss size-2x "
+     style={
+       Object {
+         "color": "inherit",
+       }
      }
-   }
-   title=""
- />
+     title=""
+   />
+ </Fragment>

  26 |     };
  27 |     const wrapper = shallow(<Icon {...props} />);
> 28 |     expect(toJson(wrapper)).toMatchSnapshot();
     |                             ^
  29 |   });
  30 | });

Here is the test file:

Component.spec.jsx

describe('Snapshot test - <Icon />', () => {
  it('renders Icon correctly in clean state', () => {
    const props = {
      icon: 'dismiss',
    };
    const wrapper = shallow(<Icon {...props} />);
    expect(toJson(wrapper)).toMatchSnapshot();
  });
  it('renders Icon correctly when colour provided', () => {
    const props = {
      icon: 'dismiss',
      color: '#000000',
    };
    const wrapper = shallow(<Icon {...props} />);
    expect(toJson(wrapper)).toMatchSnapshot();
  });

});

Here is the component file: component.js


class Icon extends React.PureComponent<IconProps> {
  render() {
    const {
      className, icon, size, colour, style, title,
    } = this.props;

    return (
      <React.Fragment>
        {icon === 'spinner' ? <LoadingSpinnerSmall /> : (
          <span
            title={title}
            className={`${css.icon} ${css[`icon-${icon}`]} ${css[`size-${size}`]} ${className}`}
            style={{ color: colour || 'inherit' }}
          />
        )
      }
      </React.Fragment>
    );
  }
}

export default Icon;

Jest can't run a standalone test

When running all the tests at once, everything is good, but when I try to run only one file from the terminal, like jest Header.test.js, it throws an error. My package.json:

{
  "name": "bmiq-widget",
  "version": "0.1.0",
  "private": true,
  "main": "lib/index.js",
  "dependencies": {
    "@testing-library/jest-dom": "^4.2.0",
    "@testing-library/react": "^9.3.0",
    "bmiq-style-guide": "*",
    "react": "^16.8.6",
    "react-app-polyfill": "^1.0.1",
    "react-dev-utils": "^9.0.1",
    "react-dom": "^16.8.6"
  },
  "devDependencies": {
    "npm-run-all": "*",
    "cross-env": "^5.2.0",
    "@babel/core": "7.4.3",
    "babel-eslint": "^10.0.1",
    "babel-jest": "^24.8.0",
    "babel-loader": "8.0.5",
    "babel-plugin-named-asset-import": "^0.3.2",
    "babel-preset-react-app": "^9.0.0",
    "eslint": "5.16.0",
    "eslint-config-react-app": "^4.0.1",
    "eslint-loader": "^2.1.2",
    "eslint-plugin-flowtype": "^2.50.1",
    "eslint-plugin-import": "^2.16.0",
    "eslint-plugin-jsx-a11y": "^6.2.1",
    "eslint-plugin-react": "^7.12.4",
    "eslint-plugin-react-hooks": "1.6.0",
    "eslint-config-airbnb": "^18.0.1",
    "@svgr/webpack": "4.1.0",
    "webpack": "4.29.6",
    "webpack-cli": "^3.3.0",
    "jest": "24.7.1",
    "jest-environment-jsdom-fourteen": "0.1.0",
    "jest-resolve": "24.7.1",
    "jest-watch-typeahead": "0.3.0"
  },
  "scripts": {
    "build": "npm-run-all build:widget build:lib",
    "build:widget": "cross-env NODE_ENV=production webpack --mode production --config webpack-widget.config.js",
    "build:lib": "cross-env NODE_ENV=production webpack --mode production --config webpack-main.config.js",
    "test": "jest --coverage --b --config jest.config.js"
  },
  "eslintConfig": {
    "extends": "react-app"
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

My .babelrc

{
  "presets": ["@babel/preset-env", "@babel/preset-react"],
  "plugins": [
    "@babel/plugin-proposal-class-properties"
  ]
}

My jest.config.js

module.exports = {
  roots: [
    '<rootDir>/src',
  ],
  collectCoverageFrom: [
    'src/**/*.{js,jsx,ts,tsx}',
    '!src/**/*.d.ts',
  ],
  setupFiles: [
    'react-app-polyfill/jsdom',
  ],
  setupFilesAfterEnv: [],
  testMatch: [
    '<rootDir>/src/**/__tests__/**/*.{js,jsx,ts,tsx}',
    '<rootDir>/src/**/*.{spec,test}.{js,jsx,ts,tsx}',
  ],
  testEnvironment: 'jest-environment-jsdom-fourteen',
  transform: {
    '^.+\\.(js|jsx|ts|tsx)$': '<rootDir>/node_modules/babel-jest',
    '^.+\\.css$': '<rootDir>/config/jest/cssTransform.js',
    '^(?!.*\\.(js|jsx|ts|tsx|css|json)$)': '<rootDir>/config/jest/fileTransform.js',
  },
  transformIgnorePatterns: [
    '[/\\\\]node_modules[/\\\\].+\\.(js|jsx|ts|tsx)$',
    '^.+\\.module\\.(css|sass|scss)$',
  ],
  modulePaths: [],
  moduleNameMapper: {
    '^react-native$': 'react-native-web',
    '^.+\\.module\\.(css|sass|scss)$': 'identity-obj-proxy',
  },
  moduleFileExtensions: [
    'web.js',
    'js',
    'web.ts',
    'ts',
    'web.tsx',
    'tsx',
    'json',
    'web.jsx',
    'jsx',
    'node',
  ],
  watchPlugins: [
    'jest-watch-typeahead/filename',
    'jest-watch-typeahead/testname',
  ],
};

Any idea what the problem is? I've tried changing the Babel presets, and I also tried changing transformIgnorePatterns in the Jest config file. I have been researching for hours and no method has helped. The strange thing is that when I run all the tests, no error is encountered; the error pops up only when running a single file. Thanks in advance.