samedi 31 octobre 2020

Accessing a variable in a data frame by column number in R?

I hope you haven't caught the coronavirus and that you are all healthy. I have a data frame "df" with 41 variables, var1 to var41. If I write this command

pcdtest(plm(var1~ 1 , data = df, model = "pooling"))[[1]]

I can see the test value. But I need to apply this test 41 times, so I want to access each variable by column number, i.e. "df[1]" for "var1" and "df[41]" for "var41":

pcdtest(plm(df[1]~ 1 , data = dfp, model = "pooling"))[[1]]

But it fails. Could you please help me do this? I will collect the results in a for loop and then calculate descriptive statistics over all of them, but running the test for each variable by hand is very tedious.
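Not an R answer, but as a hedged, language-neutral sketch of the loop idea (Python, with invented data): select each column by position and apply the same routine to it, collecting one result per column.

```python
# Hedged sketch (invented data): apply one routine to each column selected
# by position, the way the question wants df[1]..df[41] handled in a loop.
data = {f"var{i}": [i, i * 2, i * 3] for i in range(1, 42)}
columns = list(data)                      # column names, in order

def run_test(values):                     # stand-in for pcdtest(plm(...))
    return sum(values) / len(values)

results = {columns[k]: run_test(data[columns[k]]) for k in range(41)}
print(results["var1"], results["var41"])  # 2.0 82.0
```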

Screenplay pattern - common methods among actions

I'm currently trying out the screenplay pattern. My question concerns how to deal with methods that are common across several actions.

As an example, among my homepage actions I have a method that looks at whether a checkbox is checked. If the checkbox is not checked, it checks it; if it is already checked, it does nothing and moves to the next step. This type of method is needed in other action classes as well.

My question is: what is the best approach to staying DRY in this case? Should I implement a helper class and create an instance of it in the action classes that need the method? Or should I create a base class and have my action classes inherit from it? Or is there a better approach altogether?

I want to work according to SOLID, but I'm not sure what the best approach is in this case.
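As a hedged sketch (Python here, all names invented), one common screenplay-style answer is to extract the shared behavior into a small, reusable task that other tasks and actions compose, rather than pushing it into a base class:

```python
# Hypothetical sketch: a reusable, idempotent screenplay-style task.
# "Checkbox" is a stand-in for whatever UI abstraction the framework uses.
class Checkbox:
    def __init__(self, checked=False):
        self.checked = checked

    def is_checked(self):
        return self.checked

    def check(self):
        self.checked = True


class EnsureChecked:
    """Checks the box only if it isn't checked already."""

    def __init__(self, checkbox):
        self.checkbox = checkbox

    def perform_as(self, actor):
        if not self.checkbox.is_checked():
            self.checkbox.check()


box = Checkbox()
EnsureChecked(box).perform_as(actor=None)
print(box.is_checked())  # True
```

Composition keeps each action class focused on its own flow (single responsibility) while the shared behavior lives in exactly one place.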

Is there a way to get an event in Spring (+ JUnit) fired after all test classes?

I use Spring 5 + JUnit 5, and I have two classes, BarIT and FooIT.

@ExtendWith({SpringExtension.class})
@ContextConfiguration(classes = ModuleContextConfig.class)
public class FooIT {

    @Test
    public void foo() {
        System.out.println("Foo");
    }
}

@ExtendWith({SpringExtension.class})
@ContextConfiguration(classes = ModuleContextConfig.class)
public class BarIT {

    @Test
    public void bar() {
        System.out.println("Bar");
    }
}

This is my suite:

@RunWith(JUnitPlatform.class)
@ExtendWith({SpringExtension.class})
@SelectClasses({
    FooIT.class,
    BarIT.class
})
@IncludeClassNamePatterns(".*IT$")
public class SuiteIT {

}

I want to get an event once the tests in both classes have been executed, i.e. an event after FooIT.foo() and BarIT.bar(), but I can't manage to. I hoped for a ContextClosedEvent, but it is not fired:

@Component
@Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON)
public class ApplicationListenerBean implements ApplicationListener {

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        System.out.println("Event:" + event.toString());
    }
}

And this is the output:

Event:org.springframework.context.event.ContextRefreshedEvent..
Event:org.springframework.test.context.event.PrepareTestInstanceEvent..
Event:org.springframework.test.context.event.BeforeTestMethodEvent..
Event:org.springframework.test.context.event.BeforeTestExecutionEvent..
Foo
Event:org.springframework.test.context.event.AfterTestExecutionEvent...
Event:org.springframework.test.context.event.AfterTestMethodEvent...
Event:org.springframework.test.context.event.AfterTestClassEvent...
Event:org.springframework.test.context.event.BeforeTestClassEvent...
Event:org.springframework.test.context.event.PrepareTestInstanceEvent...
Event:org.springframework.test.context.event.BeforeTestMethodEvent...
Event:org.springframework.test.context.event.BeforeTestExecutionEvent...
Bar
Event:org.springframework.test.context.event.AfterTestExecutionEvent...
Event:org.springframework.test.context.event.AfterTestMethodEvent...
Event:org.springframework.test.context.event.AfterTestClassEvent..

Could anyone say how to do this, if it is possible?

Is there any difference between "synthetic" and "fake" data?

Over the last few days I searched Google for "what is fake data and why do we use it?", but while looking for a definition I encountered the term "synthetic data" and got stuck. When I search for "fake data", Google shows me information about Faker, Bogus, etc., which are libraries created to provide fake data. On the other hand, when I search for "synthetic data", Google shows me information about AI, ML, Data Science, etc., as well as the difference between production and synthetic data in software testing. (Link is here: https://www.softwaretestingnews.co.uk/when-to-use-production-vs-synthetic-data-for-software-testing/)

This is my question: I want to create an open-source fake-data library such as Bogus, because this will be my graduation project. To do that I have to write a literature review. Which term should I research, synthetic data or fake data? I am not sure what the difference between them is, or whether they are exactly the same thing.

Identical feature on two devices but different behaviors/scenarios for BDD testing

We have two versions of an app: Desktop and Mobile.

On desktop, we go to the detail page of a panda -> we click the change-anniversary-date button -> a pop-up opens where we can enter the new date, we validate, and it's done.

On mobile, we select the change-anniversary-date menu -> a new page opens where we enter a panda name -> we are sent to another page where we enter the new anniversary date. We validate and it's done.

While I found it easy to write scenarios for the desktop, for mobile I'm blocked on the possible scenario of selecting a non-existent panda. This behavior can't exist on desktop, since we have a list of pandas and have to click on one to reach the detail page. On mobile, by contrast, we type the name of a panda, so it's possible to enter a non-existent name.

And I'm blocked because the new feature offered is identical on both devices. Do I need to make my behaviors/scenarios match between desktop and mobile, or is it okay for them to differ based on the platform?

Thank you for your help !

How to run code using AFL in the terminal

I have some code from GitHub that I am trying to run using AFL. The code: https://github.com/karimmd/CScanner/tree/cfe7d08bf46b1eed0443f9e27bc089d68a830a45

I want to run the project and find vulnerabilities. I have put all the GitHub files inside a folder named code, so the file structure is CScanner-master/code/<all the files>. I am using this command in the terminal:

   hemlatamahaur@Hemlatas-MacBook-Pro desktop % afl-fuzz -i CScanner-master -o code ./input-testcode.c
afl-fuzz 2.56b by <lcamtuf@google.com>
[+] You have 4 CPU cores and 2 runnable tasks (utilization: 50%).
[+] Try parallel jobs - see /usr/local/Cellar/afl-fuzz/2.57b/share/doc/afl/parallel_fuzzing.txt.
[*] Setting up output directories...
[+] Output directory exists but deemed OK to reuse.
[*] Deleting old session data...
[+] Output dir cleanup successful.
[*] Scanning 'CScanner-master'...
[+] No auto-generated dictionary tokens to reuse.
[*] Creating hard links for all input files...
[*] Validating target binary...

[-] PROGRAM ABORT : Program './input-testcode.c' not found or not executable
         Location : check_binary(), afl-fuzz.c:6873

It keeps saying there is no file named input-testcode.c.

I am new to AFL, so I might be doing it wrong. How do I run this code with AFL to find vulnerabilities? Any help is much appreciated.

How to validate that a menu and submenu are clickable in Selenium with Java

I need to validate that a menu and its submenus are clickable in Selenium with Java. I need a script for this; any input would be very helpful.

How can I run "rails routes --expanded" exclusively in the Rails 6 test environment?

The details below are in reference to Rails 6.

I have created a category controller using scaffold in the test environment. Now I want to cross-check all the routes and their URIs using "rails routes --expanded" for the test environment. How can I do that? Running that command directly in the terminal gives the routes of the development environment.

Kindly help me with that. I have already checked "rails console -e test", but that only opens an IRB session for the test environment.

Thank you.

vendredi 30 octobre 2020

I'm getting an error while executing REST API automation. How do I resolve it?

My code is:

package demo1;
import io.restassured.RestAssured;
import static io.restassured.RestAssured.*;
import static org.hamcrest.Matchers.*;

import demo1.Files.PayLoad;
public class PostMethodEx1 {

    public static void main(String[] args) {
     //to validate if add place api is working as expected.
        //given-->all input details
        //when-->submit the API
        //then-->validate the response
        RestAssured.baseURI="https://rahulshettyacademy.com";
        String response=given().log().all().queryParam("key", "qaclick123").header("Content-Type","application/json")
        .body(PayLoad.addPlace())
        .when().post("maps/api/place/add/json")
        .then().assertThat().statusCode(200).body("scope", equalTo("APP"))
        .header("Server", "Apache/2.4.18 (Ubuntu)").extract().response().asString();
        
        System.out.println(response);
        
    }

}

Payload class code is:

package demo1.Files;

public class PayLoad {
  public static String addPlace(){
    
      
      return "{\\r\\n\" + \r\n" + 
            "               \"  \\\"location\\\": {\\r\\n\" + \r\n" + 
            "               \"    \\\"lat\\\": -38.383494,\\r\\n\" + \r\n" + 
            "               \"    \\\"lng\\\": 33.427362\\r\\n\" + \r\n" + 
            "               \"  },\\r\\n\" + \r\n" + 
            "               \"  \\\"accuracy\\\": 50,\\r\\n\" + \r\n" + 
            "               \"  \\\"name\\\": \\\"Akshay-MG\\\",\\r\\n\" + \r\n" + 
            "               \"  \\\"phone_number\\\": \\\"(+91) 983 893 3937\\\",\\r\\n\" + \r\n" + 
            "               \"  \\\"address\\\": \\\"29, side layout, cohen 09\\\",\\r\\n\" + \r\n" + 
            "               \"  \\\"types\\\": [\\r\\n\" + \r\n" + 
            "               \"    \\\"shoe park\\\",\\r\\n\" + \r\n" + 
            "               \"    \\\"shop\\\"\\r\\n\" + \r\n" + 
            "               \"  ],\\r\\n\" + \r\n" + 
            "               \"  \\\"website\\\": \\\"http://google.com\\\",\\r\\n\" + \r\n" + 
            "               \"  \\\"language\\\": \\\"French-IN\\\"\\r\\n\" + \r\n" + 
            "               \"}";
      
  }
    
    
    
}

The error I am getting is: Exception in thread "main" groovy.json.JsonException: Lexing failed on line: 1, column: 1, while reading '<', no possible valid JSON value or punctuation could be recognized.
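Not the fix itself, but a hedged illustration of the underlying pitfall, namely hand-escaping a JSON string: building the body as a data structure and serializing it produces valid JSON without any manual escaping. A Python sketch using the question's field names:

```python
import json

# Hedged sketch: build the payload as a plain data structure and let the
# serializer produce valid JSON, instead of hand-escaping a string.
payload = {
    "location": {"lat": -38.383494, "lng": 33.427362},
    "accuracy": 50,
    "name": "Akshay-MG",
    "phone_number": "(+91) 983 893 3937",
    "address": "29, side layout, cohen 09",
    "types": ["shoe park", "shop"],
    "website": "http://google.com",
    "language": "French-IN",
}
body = json.dumps(payload)
print(json.loads(body)["name"])  # Akshay-MG
```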

What is the best way to cover a method that just calls another method?

What is the best way to test, or to exclude from coverage, the following method? Notice that the update method just calls the service, and I must cover 80% of the class/code.

(the method in question was shown in an attached screenshot)
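One common approach, sketched here in Python with invented names: a method that only delegates is covered by asserting the delegation itself against a mock collaborator.

```python
from unittest.mock import Mock

# Hypothetical sketch (names invented): cover a pure-delegation method by
# asserting that it forwards the call and returns the collaborator's result.
class Updater:
    def __init__(self, service):
        self.service = service

    def update(self, item):
        return self.service.update(item)


service = Mock()
service.update.return_value = "ok"
result = Updater(service).update("record-1")
service.update.assert_called_once_with("record-1")
print(result)  # ok
```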

TestCafe EC2 Network logs

We are "successfully" running our gherkin-testcafe build on EC2, headless against Chromium. The final issue we are dealing with is that at a certain point in the test a CTA button shows "...loading" instead of "Add to Bag", presumably because a service call that gets the status of the product (out of stock, in stock, no longer carried, etc.) is failing. The tests work locally, of course, where we have the luxury of debugging by opening Chrome's dev tools and inspecting the network calls. But on EC2 all we can do is record a video and see where it fails. Is there a way to view logs of all the calls made through TestCafe's proxy, so we can confirm which one is failing and why? We are using
const rlogger = RequestLogger(/.*/, { logRequestHeaders: true, logResponseHeaders: true });

to log our headers, but we are not getting very explicit reasons why calls are failing.

TestCafe loop over elements and check if class exists within Set

I have a table with a column of icons. Each icon has a class of "test" and then "test" + [some rating]. The rating could be A, B, C, XX, or YY. I'd like to select the group of icons, loop over them, pop off the last class (the one with the rating), and then expect that my Set of classConsts contains the class in question. I've done a bunch of research but can only find examples of interacting with each of the elements; I'm not sure what I'm doing wrong when trying to check the classes on each instead. Any help is appreciated, thanks.

The code below blows up when I call myArray.nth, saying it's not a function (sorry it's a bit messy; I had to change some variable names around).

test('Sers are verified', async t => {
    const IconArr = Selector('#testtable .test');
    var count = await IconArr().count;
    var myArray = await IconArr();

    const classConsts = ["testClassA", "testClassB", "testClassC", "testClassXX", "testClassYY"]

    let mySet = new Set(classConsts);

    for(let i = 1; i <= count; i++){
        mySet.has(myArray.nth(i).classNames.pop());
        await t.expect(mySet.has(i.classNames.pop())).eql(true)
    
    };
});

How to check String equality in Google Truth assertions?

Truth.assertThat(expected).matches(actual) or Truth.assertThat(expected).isEqualTo(actual) ?

The docs say that the matches() method takes a String in the form of a regex, but I'm not sure whether a string literal works as well. That's what got me confused.
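The distinction matters because regex metacharacters change the meaning of a "literal" pattern. A quick illustration (in Python's re module, but the same regex semantics apply): a pattern can match strings it does not equal, which is why an equality assertion and a regex-match assertion are not interchangeable.

```python
import re

# "." is a regex metacharacter, so a pattern used as a "literal" string
# can match strings it does not equal:
print(re.fullmatch("a.c", "abc") is not None)  # True: "." matches any char
print("a.c" == "abc")                          # False: not equal as strings
print(re.fullmatch(re.escape("a.c"), "abc"))   # None: escaped literal fails
```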

Testing stackoverflow

Just testing how this website works; please ignore, this will be gone soon. I don't think this question will meet your standards (it isn't supposed to); I just want to see whether, after posting a question, I can also delete it.

How to perform white-box testing on a Laravel project? [closed]

I want to perform white-box testing on my project, "Hostel Automation System". I have knowledge of white-box testing, but I don't know how to actually do it, because I'm confused about how to create a test case for a specific function.

Is there any software or tool to perform white-box testing on Laravel projects?

Cypress test fails in GitHub Actions, but not locally in normal mode: page-load timeout

I have a couple of tests in Cypress. My website needs basic auth (I cannot remove that), so I use one of the several ways Cypress lets you log in:

cy.visit({
       url: 'https://www.bbla.dev/',
       method: 'GET',
       headers: {
         'authorization': 'basic blabla=='
       }
     })
cy.visit('/',
       {
       auth: {
         username: json.username,
         password: json.password
       }
     })

cy.visit('https://admin:admin.bla@www.bla.dev')

Locally my tests pass, all of them (I have 4 suites). The problem is when I run them in my GitHub Actions CI.

This is the code I'm using:

cypress-run:
  runs-on: ubuntu-16.04
  # Cypress Docker image with Chrome v78
  # and Firefox v70 pre-installed
  container: cypress/browsers:node12.13.0-chrome78-ff70
  steps:
    - uses: actions/checkout@v1
    - uses: cypress-io/github-action@v2
      with:
        browser: chrome
        working-directory: ./services/bla

Sometimes the first suite passes and then it times out; sometimes it just times out from the beginning: "Timeout when waiting on the page to load".

GitHub Actions logs

How to know if a sample belongs to a new probability distribution?

I am trying to see if a sample (DemMesReal) belongs to a new discrete probability distribution (DemMesDistr).

This is the code I have written; however, I do not know how to continue. I've already thought about bootstrapping, but I only want to see whether the sample belongs to the probability distribution. The distribution returns 12 values for each sample (the same length as the original sample).

# Define seed
set.seed(12345)
# Original Sample
DemMesReal <- c(0, 230,   0,   0,   0, 151,   0,   0, 150,   0, 150,   0)

MonthsYear <- length(DemMesReal)
# Initial parameters
unif1min <- min(DemMesReal[which(DemMesReal>0)])
unif1max <- max(DemMesReal[which(DemMesReal>0)])
pber1 <- sum(DemMesReal==0)/MonthsYear


# Probability distribution of the stochastic monthly demand
DemMesDistr <- function(n=1, unif1min = 0, unif1max = 300, 
                        pber1=0, MonthsYear=12) {

  # Stochastic monthly demand results:
  DemMesProb <- matrix(NA, nrow=n, ncol=MonthsYear)

  for (j in 1:n){
    # MONTHLY DEMAND: Bernoulli and discrete uniform

    for (i in 1:MonthsYear) {

      # Bernoulli (zero-inflated: most months the product is not ordered)
      ber <- rbern(n=1, prob=1-pber1) # Most months (70%) the customer does not order the product

      # Uniform
      # If the Bernoulli draw is 1 (that month, the product analysed is ordered):
      if (ber==1) {DemMonth <- rdunif(n=1, min=unif1min, max=unif1max)}  # min=0 or 1 is not a problem: we're working with approximations (if 1 and unif2max=0: NA generated)
      # If the Bernoulli draw is 0 (that month the product is not ordered):
      if (ber==0) {DemMonth <- 0}

      # Monthly demand in month i
      DemMesProb[j,i] <- DemMonth
    }
  }
  DemMesProb
}

set.seed(12345)
# Probabilistic function obtaining 1000 samples
DemProb <- DemMesDistr(n=1000, unif1min, unif1max, pber1)
DemMesDistrVect <- as.vector(DemProb)
hist(DemMesDistrVect, freq=FALSE, breaks=10, main="Demanda Estocástica")
hist(DemMesReal, freq=FALSE, breaks=10, main="Demanda Real")

# Same number of Zeros
sum(DemMesDistrVect==0)/sum(length(DemMesDistrVect))
sum(DemMesReal==0)/sum(length(DemMesReal)) 

I do not know what else to do, beyond the code above, to determine whether the sample belongs to the probability distribution (I can simulate lots of samples from the distribution).
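One standard way to compare an observed sample against draws simulated from a candidate distribution is a two-sample Kolmogorov-Smirnov statistic, the largest gap between the two empirical CDFs. A hedged, pure-Python sketch (the "simulated" data here is an invented stand-in for DemMesDistr output, not the question's actual draws):

```python
from bisect import bisect_right

# Hedged sketch: two-sample KS statistic = max gap between the ECDFs.
def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    gaps = []
    for x in set(a) | set(b):
        fa = bisect_right(a, x) / len(a)   # ECDF of sample a at x
        fb = bisect_right(b, x) / len(b)   # ECDF of sample b at x
        gaps.append(abs(fa - fb))
    return max(gaps)

observed = [0, 230, 0, 0, 0, 151, 0, 0, 150, 0, 150, 0]
# invented stand-in for a large batch of DemMesDistr draws:
simulated = [0] * 8000 + list(range(150, 231)) * 49
print(round(ks_statistic(observed, simulated), 3))
```

A small statistic suggests the sample is compatible with the distribution; to get a p-value with a discrete, zero-inflated distribution like this one, the usual route is to simulate the statistic's null distribution rather than rely on the asymptotic KS tables.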

Need help trying to test function to find area of rectangle using co-ordinates

I have a Rectangle class that finds the area of a rectangle using top-left and bottom-right coordinates. I'm having trouble writing a test function to verify that the code works.

class Rectangle: # rectangle class
    # make rectangle using top left and bottom right coordinates
    def __init__(self,tl,br):
        self.tl=tl
        self.br=br
        self.width=abs(tl.x-br.x)  # width
        self.height=abs(tl.y-br.y) # height
    def area(self):
        return self.width*self.height

So far I have written this, which leads to an AttributeError: 'tuple' object has no attribute 'x':

def test_rectangle():
    print("Testing rectangle class")
    rect = Rectangle((3,10),(4,8))
    actual = rect.area()
    print("Result is %d" % actual)

What can I change in the code to make it work?
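The error comes from passing plain tuples, which have no .x/.y attributes. A hedged sketch of one way to satisfy the class's expectations without changing it: pass point objects (a namedtuple works) instead of tuples. The Rectangle class is reproduced here so the snippet is self-contained.

```python
from collections import namedtuple

# The Rectangle class accesses tl.x, tl.y, br.x, br.y, so its arguments
# must be objects with .x and .y attributes; a namedtuple provides them.
Point = namedtuple("Point", ["x", "y"])

class Rectangle:
    def __init__(self, tl, br):
        self.tl = tl
        self.br = br
        self.width = abs(tl.x - br.x)
        self.height = abs(tl.y - br.y)

    def area(self):
        return self.width * self.height

def test_rectangle():
    rect = Rectangle(Point(3, 10), Point(4, 8))   # width 1, height 2
    print("Result is %d" % rect.area())           # Result is 2

test_rectangle()
```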

How do you stub and test a helper that invokes a different method depending on the object class passed to it?

I've got this helper, which I'm trying to write tests for in Minitest. The helper calls another method depending on the class of the object passed as an argument, like so:

def label_for(object)
  status = object&.status
  case object.class.name
  when "Subscription"
    class_for_subscription_status(status)
  when "Payment"
    class_for_payment_status(status)
  when "Purchase"
    class_for_purchase_status(status)
  when "Invoice"
    class_for_invoice_status(status)
  when "Ticket"
    class_for_ticket_status(status)
  end
end

Each individual method is already tested elsewhere, so I just need to test that if I pass a Subscription object to label_for, it invokes class_for_subscription_status(status) and not something else.

This is the test I've come up with, but I get NoMethodError: undefined method `class_for_subscription_status' for #&lt;AuxiliariesHelperTest&gt; errors.

  test "#label_for(object) should invoke the right helper if object is of class Subscription" do
    AuxiliariesHelperTest.any_instance.stubs(:label_for).with(subscriptions(:user)).returns(:class_for_subscription_status)

    assert_equal class_for_subscription_status(subscriptions(:user).status), label_for(subscriptions(:user))
  end

What am I doing wrong?

How to remove set value in the input using cypress testing?

// you set an input value:
cy.get('.ta-table-toolbar-search > .w-100').type('f9462f22{enter}');
// how do I remove this input value using Cypress?

How to make Enzyme or Jest wait until the context has done its work

I am passing a function through the context; it is called when the component that receives this context is mounted. This function sets a value on the parent component's state.

Example is here https://codesandbox.io/s/react-jest-enzyme-forked-16s74?file=/src/App.js

In the browser this works as follows: at first the state is {validations: {}}, and then it updates to {validations: {name: 'some-name'}}.

But in the test it only works the first time: the state never updates to {validations: {name: 'some-name'}}.

How can I fix this?

Components
Validation

export class Validation extends React.Component {
  static childContextTypes = {
    addValidation: PropTypes.func
  };

  state = {
    validations: {}
  };

  getChildContext() {
    return {
      addValidation: (name, value) => {
        this.setState((prevState) => ({
          ...prevState,
          validations: { ...prevState.validations, [name]: value }
        }));
      }
    };
  }

  render() {
    console.log(this.state);
    return this.props.children;
  }
}

ValidationField

export class ValidationField extends React.Component {
  static contextTypes = {
    addValidation: PropTypes.func
  };

  componentDidMount() {
    const { name, validation } = this.props;
    this.context.addValidation && this.context.addValidation(name, validation);
  }

  render() {
    return <input />;
  }
}

Test

it("", () => {
    const wrapper = mount(
      <Validation>
        <ValidationField name="fieldName" validation="blabla" />
      </Validation>
    );

    expect(wrapper.state()).toBe({
      validations: {
        fieldName: "blabla"
      }
    });
  });   

How to determine the XPATH of Android UI elements

I am trying to find the XPath of elements on an Android device (native application). I can get the XPath through UiAutomatorViewer, and that works for me, but it is manual. I need it dynamically, like SeeTest does (they capture the XPath by clicking on device elements).

I used the ADB shell to get some activity data, but that data was not suitable.

Selenium with chromium based app and Docker: Chrome failed to start: crashed (chrome not reachable)

I wrote some Selenium tests with Java + Maven to run against my Chromium-based application. Everything worked fine as long as I ran them on my PC with Windows 10, but when I try to run them in a Docker container I get this error:

Starting ChromeDriver 2.46.628402 (536cd7adbad73a3783fdc2cab92ab2ba7ec361e1) on port 34013
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.657 s <<< FAILURE! - in com.me.seleniumtests.ProjectFlowTests
[ERROR] projectTest  Time elapsed: 3.625 s  <<< ERROR!
org.openqa.selenium.WebDriverException:
unknown error: Chrome failed to start: crashed
  (chrome not reachable)
  (The process started from chrome location app/app.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)

Code:

System.setProperty("webdriver.chrome.driver", "src/main/resources/drivers/chromedriver.exe");
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.setBinary("app/app.exe");
chromeOptions.addArguments("--no-sandbox");
chromeOptions.addArguments("--headless");
chromeOptions.addArguments("--disable-setuid-sandbox");
chromeOptions.addArguments("remote-debugging-port=9222");
ChromeDriver chromeDriver = new ChromeDriver(chromeOptions);
chromeDriver.get("https://www.google.com/");

I'm using:

  1. A Docker image with OpenJDK 8, Maven 3.6 and Windows Server Core from here (link). PowerShell command to run the tests in Docker:

    docker run -it --rm --name selenium-tests -v "$(Get-Location):C:/Src" -w C:/Src csanchez/maven:3.6-openjdk-8-windowsservercore-1809 mvn test

  2. A desktop application written with a Chromium-based framework: Chromium 71, ChromeDriver 2.46, JUnit 5.7.0, Selenium 3.141.

I tried adding some flags via ChromeOptions and changing the ChromeDriver version. I also read about using docker run with --privileged, but I don't think that's possible with a Windows container.

Django test client requests the wrong path

I am building a Django REST app with mixins:

class ModelViewSet(mixins.RetrieveModelMixin, mixins.CreateModelMixin):
    queryset = Model.objects.all()
    (...)

and retrieve() is visible at localhost:8000/model/&lt;uuid&gt;. Then I add a function for counting all Model instances:

    @action(detail=False, methods=["GET"], url_path="count")
    def get_model_count(self, request):
        queryset = self.get_queryset()
        data = {"count": queryset.count()}
        return Response(status=status.HTTP_200_OK, data=data)

Then when I try to test it:

    def test_model_count(self):
        list_size = 10
        for i in range(list_size):
            response = self.create_model()
            self.assertEqual(response.status_code, 200, response.content)
        response = self.client.get(reverse("model-get-model-count"))
        self.assertEqual(response.status_code, 200, response.content)

I get this error: AssertionError: 400 != 200 : b"Invalid parameter robot_id value: 'all' is not a 'uuid'"

The interesting part is that when I use the web UI or curl, everything works fine: I can get the count, get model details, and create a new model. I tried changing the order so that count is implemented first, before retrieve, but the error persists. I hope I've provided enough information; thanks in advance.
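A hedged sketch of one likely cause: if the detail route's pk pattern is loose enough to match the literal "count", the request is dispatched to retrieve() instead of the custom action. Constraining the lookup to uuid-shaped values (DRF exposes a `lookup_value_regex` attribute on ViewSets for this) avoids the clash. The regex check itself, in plain Python:

```python
import re

# Hedged sketch: a uuid-shaped lookup pattern (usable as DRF's
# lookup_value_regex on the ViewSet) will not swallow the "count" path.
UUID_RE = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"

print(bool(re.fullmatch(UUID_RE, "count")))                                 # False
print(bool(re.fullmatch(UUID_RE, "123e4567-e89b-12d3-a456-426614174000")))  # True
```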

Which test in SPSS?

I have an independent grouping variable, university department class (1, 2, 3, 4), and a dependent question with yes/no answers. Which test should I choose? (a screenshot of the data was attached)
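With one categorical factor (class 1-4) and a yes/no outcome, the usual choice is a chi-square test of independence on the 4x2 contingency table (SPSS: Analyze > Descriptive Statistics > Crosstabs, with the chi-square statistic enabled). A hedged Python sketch with invented counts showing how the statistic is computed:

```python
# Hedged sketch (invented counts): chi-square statistic for a 4x2 table
# of class (rows) by yes/no answer (columns).
observed = [[20, 10], [15, 15], [25, 5], [10, 20]]
row_tot = [sum(row) for row in observed]
col_tot = [sum(col) for col in zip(*observed)]
total = sum(row_tot)

# sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total
chi2 = sum(
    (observed[i][j] - row_tot[i] * col_tot[j] / total) ** 2
    / (row_tot[i] * col_tot[j] / total)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
print(round(chi2, 2))  # compare against chi-square with (4-1)*(2-1) = 3 df
```

If any expected cell count falls below about 5, Fisher's exact test is the usual fallback.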

@DataJpaTest loads context so fails on missing beans

I've been at this for a few hours now and have tried various things to no avail.

I am trying to write basic @DataJpaTest classes to test my native queries etc.

The issue is that the test seems to load the entire Spring application context rather than the "slice" required for the test.

reflectoring.io - Data JPA Tests

As far as I understand, @DataJpaTest should only load the bare minimum of beans for entity management, repositories and a few other basics.

I am overriding the default H2 database it uses (as I have custom native SQL queries), so my test looks like:

@DataJpaTest
@ActiveProfiles("jpa")
@Log4j2
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class BrandServiceJpaDataTest {

    @Autowired
    private WebApiBrandServiceRepo brandServiceRepo;

    @Autowired
    private BrandRepo brandRepo;

    @Test
    @DisplayName("Get Brand Services from REPO")
    public void getBrandServices() {
        final List<BrandService> all = brandServiceRepo.findAll();
        log.info(all);
    }

}

The properties are simple. I was trying to use Testcontainers, but would rather get this working against a basic locally hosted DB first:

#============ DATABASE ===========#
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/aspwtest
spring.datasource.username=dev
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.database-platform=org.hibernate.dialect.MySQL8Dialect
spring.jpa.show-sql=true
        
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = create-drop

When running the test, I can see in the logs the application context being loaded, leading to missing-bean errors; it seems to be loading far too much of the context (i.e. missing RestTemplate/validation beans, etc.).

Description:

Field validator in ms.allstar.wallet.webserver.patching.EntityPatcher required a bean of type 'javax.validation.ValidatorFactory' that could not be found.

The injection point has the following annotations:
    - @org.springframework.beans.factory.annotation.Autowired(required=true)


Action:

Consider defining a bean of type 'javax.validation.ValidatorFactory' in your configuration.

Is there some form of config I am missing?

I know I could probably go around annotating some beans with something like @ActiveProfile("jpadatatest"), but that feels very hacky and would sort of defeat the purpose of @DataJpaTest.

Tests work correctly but throw an error with testing-library in React

Warning: An update to Form inside a test was not wrapped in act(...).

test('should watch input correctly', async () => {
  const { container } = render(<Form />);
  const password = container.querySelector("input[name='password']");
  const repeatPassword = container.querySelector(
    "input[name='repeatPassword']",
  );

  await act(async () => {
    fireEvent.input(password, {
      target: {
        value: 'Test2020.1',
      },
    });
  });

  await act(async () => {
    fireEvent.input(repeatPassword, {
      target: {
        value: 'Test2020.1',
      },
    });
  });

  expect(password.value).toEqual('Test2020.1');
  expect(repeatPassword.value).toEqual('Test2020.1');
});

I have been reading a lot about this known issue, but I can't find a solution. I'm using react-hook-form to read the values.

How to run Instrumented test using different contexts?

I want to test that my chat app can send and receive messages, using the Android instrumented test framework.

My application's main instance needs a Context instance for initialisation and authentication, so: one context, one client.

Is there any testing technique that would allow having two different application contexts, maybe with two devices?

Please don't suggest Robolectric; I shouldn't use it, because my app contains native libraries that are not compatible with it.

How can I test the BeforeInstallPromptEvent in Jasmine/Angular?

I'm writing a test for a dialog component that handles the PWA install event, prompted manually when a function is called. It then resolves the promise coming back from the BeforeInstallPromptEvent.userChoice property.

However, when I try to test this function, I get an error from Jasmine:

'userChoice.then is not a function'

I understand this, since I tried spying on it with spyOn to then check the result.

I tried a few other things with Promises, since userChoice returns a resolved promise (https://developer.mozilla.org/en-US/docs/Web/API/BeforeInstallPromptEvent), but nothing has worked so far.

This is the function I am trying to test. The BeforeInstallPromptEvent is stored in the installEvent object, which gets passed down to the component from app.ts and then stored in data.

this.data.installEvent.userChoice
            .then((choiceResult) => {
                if (choiceResult.outcome === 'accepted') {
                    doSomething();

In my test spec I created a mockData object as follows:

const mockData = {
  installEvent: {
    prompt: () => {},
    userChoice: Promise.resolve().then(() => {
      return { choiceResult: { outcome: "accepted" } };
    }),
  },
};

My test spec so far (after many different attempts) is either this:

it("should do something after user accepted", fakeAsync(() => {
 mockData.installEvent.userChoice.then((result) => {
      return result.choiceResult.outcome;
//this test returns true
      expect(result.choiceResult.outcome).toEqual("accepted");
    }); 
flush();
//but the function gets never called
expect(component.doSomething).toHaveBeenCalled();
});

Or with the spyOn function on the userChoice that produces the error:

it("should do something after user accepted", () => {
spyOn(mockData.installEvent, "userChoice").and.returnValue("accepted");

expect(mockData.installEvent.userChoice).toBe("accepted)
expect(component.doSomething).toHaveBeenCalled();
});

With that, I already managed to test that the data.installEvent.prompt function gets called.

But I don't know how to test that the userChoice Promise resolves to "accepted" and then triggers the if condition.

Please help me. Since I am a beginner at testing (in Angular), it would be nice to get an explanation of the method behind testing such Promises.

Thank you very much in advance!
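For what it's worth, the usual pattern for asserting on a promise is to await it in an async test rather than spy on the property. A minimal sketch in plain JavaScript (no Angular; handleInstall and doSomething are hypothetical stand-ins for the component logic; note that in the component above the promise resolves to the choice result itself, so the mock resolves to { outcome: 'accepted' } directly rather than wrapping it in a choiceResult key):

```javascript
// Hypothetical reduction of the component logic, to show the await pattern.
const mockData = {
  installEvent: {
    prompt: () => {},
    // userChoice resolves to the choice result itself, matching
    // `userChoice.then((choiceResult) => ...)` in the component
    userChoice: Promise.resolve({ outcome: "accepted" }),
  },
};

let called = false;
const doSomething = () => { called = true; };

// stand-in for the method under test
async function handleInstall(data) {
  const choiceResult = await data.installEvent.userChoice;
  if (choiceResult.outcome === "accepted") {
    doSomething();
  }
}

// in a real spec: it("...", async () => { await handleInstall(mockData); expect(...); })
handleInstall(mockData).then(() => {
  console.log(called); // true
});
```

Awaiting the promise before asserting guarantees the if branch has actually run by the time the expectation executes.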

Thursday, 29 October 2020

How to click an image in Appium Selenium mobile driver by iOSNsPredicateString

I tried to use it, but without any results:

String selector = "type == 'XCUIElementTypeImage' AND name == 'profilePlaceholder'";
$(MobileBy.iOSNsPredicateString(selector)).click();

How to use In-app FLEXIBLE and IMMEDIATE update in the same Activity

I am trying to add in-app updates to my Android application. I want to support both FLEXIBLE and IMMEDIATE updates, but I cannot work out where to put the listener registration code and the rest of the logic.

I am calling this function in onCreate of my activity.

        Task<AppUpdateInfo> appUpdateInfoTask = appUpdateManager.getAppUpdateInfo();
        appUpdateInfoTask.addOnSuccessListener(appUpdateInfo -> {
            if (appUpdateInfo.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE) {
                if (appUpdateInfo.isUpdateTypeAllowed(IMMEDIATE)) {
                    try {
                        appUpdateManager.startUpdateFlowForResult(appUpdateInfo, IMMEDIATE, this, MY_REQUEST_CODE);
                    } catch (IntentSender.SendIntentException e) {
                        e.printStackTrace();
                    }
                } else if (appUpdateInfo.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)) {
                    try {
                        appUpdateManager.startUpdateFlowForResult(appUpdateInfo, AppUpdateType.FLEXIBLE, this, MY_REQUEST_CODE);
                    } catch (IntentSender.SendIntentException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
    }
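One way to keep the FLEXIBLE/IMMEDIATE branching manageable is to extract the decision into a plain function that can be unit-tested without the Play Core APIs. A sketch under that assumption (UpdateChooser and its enum are hypothetical; the booleans mirror updateAvailability() and isUpdateTypeAllowed(...) from the snippet above):

```java
// Hypothetical helper: isolates the IMMEDIATE-first decision from the snippet above.
public class UpdateChooser {
    public enum UpdateType { NONE, FLEXIBLE, IMMEDIATE }

    // available mirrors updateAvailability() == UPDATE_AVAILABLE;
    // immediateAllowed/flexibleAllowed mirror isUpdateTypeAllowed(...)
    public static UpdateType choose(boolean available,
                                    boolean immediateAllowed,
                                    boolean flexibleAllowed) {
        if (!available) {
            return UpdateType.NONE;
        }
        if (immediateAllowed) {
            return UpdateType.IMMEDIATE; // same precedence as the if/else above
        }
        if (flexibleAllowed) {
            return UpdateType.FLEXIBLE;
        }
        return UpdateType.NONE;
    }

    public static void main(String[] args) {
        System.out.println(choose(true, true, false));  // IMMEDIATE
        System.out.println(choose(true, false, true));  // FLEXIBLE
        System.out.println(choose(false, true, true));  // NONE
    }
}
```

The success listener then only has to call startUpdateFlowForResult with the chosen type, which keeps the Activity code short.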

RSpec HTTP request tests always passing

So I am having trouble getting some RSpec tests to FAIL. No matter what I try they always pass; well, one of the tests works correctly, and I verified it can fail if I modify the code it is testing.

I am trying to test the content of JSON responses made to an external API, to make sure my controller is sorting the returned JSON objects correctly.

Am I missing something here? Why can't I get these to fail?

RSpec.describe 'Posts', type: :request do
  describe 'ping' do
      it 'returns status 200' do # This text block works correctly.
        get "/api/ping"
        expect(response.content_type).to eq("application/json; charset=utf-8")
        expect(response).to have_http_status(200)
      end
  end

  describe 'get /api/posts' do # all tests below always pass, no matter what.
    it 'should return an error json if no tag is given' do
      get "/api/posts" 
      expect(response.content_type).to eq("application/json; charset=utf-8")
      expect(response.body).to eq("{\"error\":\"The tag parameter is required\"}")
    end

    it 'should return a json of posts' do
      get "/api/posts?tags=tech" do
        expect(body_as_json.keys).to match_array(["id", "authorid", "likes", "popularity", "reads", "tags"])
      end
    end

    it 'should sort the posts by id, in ascending order when no sort order is specified' do
      get "/api/posts?tags=tech" do
        expect(JSON.parse(response.body['posts'][0]['id'].value)).to_be(1)
        expect(JSON.parse(response.body['posts'][-1]['id'].value)).to_be(99)
      end
    end

    it 'should sort the posts by id, in descending order when a descending order is specified' do
      get "/api/posts?tags=tech&direction=desc" do
        expect(JSON.parse(response.body['posts'][0]['id'].value)).to_be(99)
        expect(JSON.parse(response.body['posts'][-1]['id'].value)).to_be(1)
      end
    end
  end
end

Inside the block passed to get in the 'should return an error json if no tag is given' test I even tried expect(4).to eq(5), and even THAT passed!

Help much appreciated here!
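A likely culprit is that get does not take a block: a block passed to a method that never yields is silently ignored, so the expectations inside it never run. A minimal sketch of the mechanism (the get below is a hypothetical stand-in for the request helper):

```ruby
# Hypothetical stand-in for RSpec's request helper: it never yields,
# so any block passed to it is silently ignored.
def get(path)
  "response"
end

executed = false
get("/api/posts?tags=tech") do
  executed = true # never runs, just like the expect(...) calls above
end

puts executed # false
```

Moving the expectations out of the block (call get on one line, then assert on response afterwards, as in the passing 'ping' test) makes them actually execute.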

Unable to run local version of testcafe via npm link

Whenever I try to run a locally installed version of testcafe, I get the error Cannot find module '../testcafe/lib/cli'. Steps I've taken:

  1. git clone <testcafe>, then cd to testcafe
  2. npm install
  3. npm link
  4. cd to repo where my automated tests reside
  5. npm link testcafe
  6. Run testcafe <browser>

Results in

internal/modules/cjs/loader.js:796
    throw err;
    ^

Error: Cannot find module '/Users/rcooper/testcafe/lib/cli'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:793:17)
    at Function.Module._load (internal/modules/cjs/loader.js:686:27)
    at Function.Module.runMain (internal/modules/cjs/loader.js:1043:10)
    at internal/main/run_main_module.js:17:11 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Is there something else I need to do, to run a local version of this library?

Error while installing Appium: 'ENOENT: no such file or directory, chmod /usr/local/Cellar/node/15.0.1/lib/node_modules/appium/build/lib/main.js'

Whenever I run npm install -g appium in the terminal to install Appium, I get the above error on my Mac.

DetoxRuntimeError: Detox instance has not been initialized - Error on attempt to call detox.init()

I am unable to get Detox working on a project, although it works fine on CI and on my colleagues' computers. I am also able to get Detox working on a fresh project, so I am unsure what the issue could be. We are all using the same build scripts and the same version of Node, but I am unable to figure out where the issue lies or how to diagnose it further.

DetoxRuntimeError: Detox instance has not been initialized
    HINT: There was an error on attempt to call detox.init()
      4 | describe('', () => {
      5 |   beforeEach(async () => {
    > 6 |     await device.reloadReactNative();
        |                  ^
      7 |   });
      8 |
      9 |   afterAll(async () => {
      at MissingDetox.throwError (../node_modules/detox/src/utils/MissingDetox.js:67:11)

How to assert test bool functions and switch statements

I want to be able to assert-test all of my functions to ensure that they are working correctly, but I'm having difficulty figuring out how to do this for (basically any of my functions, but mainly) bool functions and switch statements. One suggested method was to create a histogram for the probability functions, but implementing this has proven difficult, so any guidance would be appreciated.

#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <assert.h>

#define ROWLENGTH 30 + BUFF
#define COLLENGTH 80 + BUFF
#define BUFF 1
#define GENERATIONS 1000
#define G 250
#define L (10 * G)

enum { space = 0, tree = 1, fire = 2 };
typedef enum { false, true } bool;

void test(void);
void newGen(int a[][COLLENGTH]);
void copyArray(int a[][COLLENGTH], int b[][COLLENGTH]);
int stateSwitch(int a, bool b);
bool neighCheck(int i, int j, int a[][COLLENGTH]);
bool probabTree(void);
bool probabFire(void);
void replace(int a[][COLLENGTH]);

int main(void)
{
    int forest[ROWLENGTH][COLLENGTH] = { {0}, {0} };
    int i;

    test();

    srand(time(NULL));

    for (i = 1; i <= GENERATIONS; i++) {
        printf("\nGen %d:\n", i);
        newGen(forest);
    }
    test();
    return 0;
}

void test(void)
{
    int testArr[ROWLENGTH][COLLENGTH] = { {0}, {0} };
    int testArr2[ROWLENGTH][COLLENGTH];
    int testArr3[ROWLENGTH][COLLENGTH];

    int i, j;

    for (i = 1; i < ROWLENGTH; i++) {
        for (j = 1; j < COLLENGTH; j++) {
            assert(neighCheck(i, j, testArr) == true ||
                neighCheck(i, j, testArr) == false);
        }
    }

    copyArray(testArr, testArr2);
    copyArray(testArr2, testArr3);
    assert(testArr2[5][5] == testArr3[5][5]);
}

void newGen(int arr[][COLLENGTH])
{
    int newForest[ROWLENGTH][COLLENGTH] = { {0}, {0} };
    int i, j, currCell;

    for (i = 1; i < ROWLENGTH; i++) {
        for (j = 1; j < COLLENGTH; j++) {
            currCell = arr[i][j];

            /* Adds the state returned from the sS function to the array*/
            newForest[i][j] = stateSwitch(currCell, neighCheck(i, j, arr));
        }
    }
    replace(newForest);

    copyArray(newForest, arr);
}

/* Checks neighbours to see if they are or should be on fire */
bool neighCheck(int i, int j, int a[][COLLENGTH])
{
    bool neighbourOnFire;
    int x, y, neighX, neighY, curreNeigh;

    /* Bool set up to change after neighbours looped*/
    neighbourOnFire = false;
    /* The neighbours -looping from -1 -> 1 to get index of each neighbour*/
    for (x = -1; x < 2; x++) {
        for (y = -1; y < 2; y++) {
            /* Disregards current (middle) cell*/
            if (!((x == 0) && (y == 0))) {
                /* Get indexes of the neighbour we're looking at */
                neighX = i + x;
                neighY = j + y;
                /* Checks for edges*/
                if (neighX >= 0 && neighY >= 0 && neighX < ROWLENGTH &&
                    neighY < COLLENGTH) {
                    /* Get the neighbour using the indexes above */
                    curreNeigh = a[neighX][neighY];

                    if (curreNeigh == fire) {
                        neighbourOnFire = true;
                    }
                }
            }
        }
    }
    return neighbourOnFire;
}

/* Function that determines the state of the array's cells */
int stateSwitch(int a, bool b)
{
    int newState;

    newState = space;

    switch (a) {
    case space:
        newState = probabTree();
        break;
    case tree:
        if (b || probabFire()) {
            newState = fire;
        }
        else {
            newState = tree;
        }
        break;
    case fire:
        newState = space;
        break;
    }
    return newState;
}

bool probabTree(void)
{
    if ((rand() % G) == 0) {
        return true;
    }
    else {
        return false;
    }
}

bool probabFire(void)
{
    if ((rand() % L) == 0) {
        return true;
    }
    else {
        return false;
    }
}
/* Replaces the states, 0, 1, 2, with characters,  , @, * */
void replace(int a[][COLLENGTH])
{
    int i, j;

    for (i = 1; i < ROWLENGTH; i++) {
        for (j = 1; j < COLLENGTH; j++) {
            if (a[i][j] == space) {
                printf("%c", ' ');
            }
            if (a[i][j] == tree) {
                printf("%c", '@');
            }
            if (a[i][j] == fire) {
                printf("%c", '*');
            }
            if (j % COLLENGTH - BUFF == 0) {
                printf("\n");
            }
        }
    }
}

/* To copy one 2D array into another */
void copyArray(int a[][COLLENGTH], int b[][COLLENGTH])
{
    int i, j;
    for (i = 0; i < ROWLENGTH; i++) {
        for (j = 0; j < COLLENGTH; j++) {
            b[i][j] = a[i][j];
        }
    }
}

I get "t.openWindow is not a function" errors when I use TestCafe Window Management methods

The process for writing multiple windows tests described in TestCafe documentation seems pretty straightforward: await t.openWindow('https://url.com/addnewproperty') or even const initialWindow = await t.getCurrentWindow()

should do it. However, every time I use any of the Window Management methods I get the errors:

TypeError: t.openWindow is not a function, and I cannot do anything about it.

Does anyone know what I am doing wrong and how to solve the issue?

TestCafe version 1.9.4

How to run a development Node.js server in one Codeship step and access it from subsequent parallel steps

We have a Vue.js application and many Puppeteer E2E tests which we want to run in parallel using Codeship.

The problem is that we want to split the tests across parallel steps but avoid running the development Vue.js server in each parallel process. We want to start it once, before the parallel steps, and have access to it from them.

How can the parallel steps share a network with the step that runs the development server, so they can access http://localhost:8080?

Here is an example of tests we would like to have

- name: serve_app
  command: yarn serve # Run the development server and wait until it is running
- name: tests
  type: parallel
  steps:
    - name: acceptance_tests_1 # use shared network (localhost:8080) from serve_app step
      service: browser
      command: yarn test:acceptance:1
    - name: acceptance_tests_2 # use shared network (localhost:8080) from serve_app step
      service: browser
      command: yarn test:acceptance:2

Angular Jest: Test Directive with @HostListener('window:scroll')

Problem: I want to test a directive which is supposed to change an element's styling after a 'window:scroll' event, but I don't know how to trigger the event.

The Directive:

@Directive({
    selector: '[directiveSelector]',
})
export class Directive {
    private breakpointOffset: number;
    private fixed = false;

    constructor(
        private readonly elementRef: ElementRef,
    ) {}

    ngAfterViewInit() {
            //fix the element when scrollTop reaches past element's offsetTop
            this.breakpointOffset = this.elementRef.nativeElement.offsetTop;
    }

    @HostListener('window:scroll')
    addScrollTrigger() {
        if (!this.fixed && window.pageYOffset > this.breakpointOffset) {
            this.elementRef.nativeElement.setAttribute(
                'style',
                'position: fixed;',
            );
            this.fixed = true;
        }

        if (this.fixed && window.pageYOffset <= this.breakpointOffset) {
            this.elementRef.nativeElement.setAttribute('style', '');
            this.fixed = false;
        }
    }
}

What I tried:

@Component({
    template: ` <div #container directive></div> `,
})
class TestComponent {
    @ViewChild('container') container: ElementRef;
}

describe('Directive', () => {
    let fixture: ComponentFixture<TestComponent>;
    let component: TestComponent;

    beforeEach(async(() => {
        TestBed.configureTestingModule({
            declarations: [TestComponent, Directive],
        }).compileComponents();

        fixture = TestBed.createComponent(TestComponent);
        component = fixture.componentInstance;
    }));

    describe('addScrollTrigger', () => {
        it('sets fixed style', () => {
            //Trigger scroll event?

            expect(component.container.nativeElement.styles).toEqual(....);
        });
    });
});

Within my test I tried to create a component whose template has a div with that directive. What I am not able to do is trigger the scroll event, because I have no direct access to the directive and cannot call the addScrollTrigger function myself, can I?
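One common approach is to dispatch the event on window yourself: set the scroll position, call window.dispatchEvent(new Event('scroll')), then fixture.detectChanges() and assert on the element's style (setting window.pageYOffset in jsdom may need an Object.defineProperty workaround, which is an assumption about the test environment). The dispatch mechanism itself, sketched in plain JavaScript with Node's built-in EventTarget standing in for window:

```javascript
// Sketch: dispatching an event manually invokes registered listeners,
// which is exactly what @HostListener('window:scroll') hooks into.
const windowLike = new EventTarget(); // stand-in for the real `window`

let handlerCalls = 0;
windowLike.addEventListener("scroll", () => {
  handlerCalls += 1; // the directive's addScrollTrigger would run here
});

windowLike.dispatchEvent(new Event("scroll"));
console.log(handlerCalls); // 1
```

So there is no need to reach into the directive instance at all; dispatching the event exercises the listener the same way a real scroll does.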

Spy on window function with testcafe

I want to test with testcafe if a function on a window object is executed with certain parameters. Is it possible with Testcafe?

The function call looks like this:

window.myObject.myFunction({customObject: true});
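TestCafe test code runs in Node while the page code runs in the browser, so a plain Jasmine-style spy will not reach window; one option is to inject a small wrapper into the page (for example via the fixture's client scripts) that records calls. The wrapping idea itself, independent of TestCafe (a sketch; windowLike, myObject, and myFunction stand in for the real page objects):

```javascript
// Sketch: wrap a function on an object so its calls and arguments are recorded.
const windowLike = {
  myObject: {
    myFunction: (opts) => "real result",
  },
};

function recordCalls(obj, name) {
  const original = obj[name];
  const calls = [];
  obj[name] = (...args) => {
    calls.push(args); // remember the arguments of every call
    return original(...args);
  };
  return calls;
}

const calls = recordCalls(windowLike.myObject, "myFunction");
windowLike.myObject.myFunction({ customObject: true });

console.log(calls.length); // 1
console.log(calls[0][0].customObject); // true
```

The test can then read the recorded calls back from the page (e.g. from a global array) and assert on them.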

Test onTap behaviour

I would like to test that when I tap a button, the user is routed to the next page. This works in the UI, but when I introduced the test I got the following error:

The following TestFailure object was thrown running a test: Expected: exactly one matching node in the widget tree Actual: _WidgetTypeFinder:<zero widgets with type "MyNextView" (ignoring offstage widgets)> Which: means none were found but one was expected

What am I doing wrong?

testWidgets('Button tap routes to next page', (WidgetTester tester) async {
    final button = createButton();
    await tester.pumpWidget(button);
    
    await tester.tap(find.byWidget(button));
    expect(find.byType(MyNextView), findsOneWidget);
  });

Vue 3 Vite and tests with Jest: no template compiler

I use a codebase made with Vue 3 and Vite, but I cannot find a way to run a simple Jest test that imports a component. This works fine in an app created with Vue CLI, but I cannot find a way to make Jest work in a Vue 3 + Vite app. Here is the error I encounter:

 FAIL  docs/homepage.spec.js
  ● Test suite failed to run



    Vue packages version mismatch:

    - vue@3.0.0 (/var/www/html/my_project_vue3_vite/node_modules/vue/index.js)
    - vue-template-compiler@2.6.12 (/var/www/html/my_project_vue3_vite/node_modules/vue-template-compiler/package.json)

But vue-template-compiler does not exist in version 3, so there is no way to make it work today.

I found this example repo, but it uses Mocha and Chai, not Jest, and I cannot find a way to adapt it: https://github.com/JessicaSachs/vite-component-test-starter

Here is my package.json:

{
  "name": "front-operation",
  "version": "0.1.0",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "start": "npm run dev && wait-on http://localhost:3000",
    "test": "jest --coverage",
    "tdd:vite": "vite --config test/vite.config.test.js",
    "tdd": "aria-vue -w --path test-ui --script test/plugins.js",
    "headless": "aria-vue --offline -w -H --script test/plugins.js ",
    "test:watch": "jest --watch",
    "test:ci": "jest --ci",
    "lint": "prettier --write \"src/**/*.{js,jsx,ts,tsx,md,html,css,scss}\"",
    "e2e": "jest",
    "format:check": "prettier --list-different \"src/{app,environments,assets}/**/*{.ts,.js,.json,.css,.scss}\"",
    "format:all": "prettier --write \"src/**/*.{js,jsx,ts,tsx,md,html,css,scss}\"",
    "doc": "compodoc -p tsconfig.json src ",
    "mock:server": "json-server --port 8000 --watch ./src/mocks/db.json --routes ./src/mocks/routes.json",
    "start:proxy": "vite serve --proxy-config proxy.conf.json",
    "start:proxymock": "concurrently --kill-others \"npm run mock:server\" \"yarn start:proxy\""
  },
  "dependencies": {
    "@tailwindcss/typography": "^0.2.0",
    "@tailwindcss/ui": "^0.6.2",
    "@vue/compiler-sfc": "3.0.0",
    "axios": "^0.21.0",
    "core-js": "^3.6.5",
    "jest": "^26.6.1",
    "moment": "^2.29.1",
    "qrcode.vue": "^1.7.0",
    "tailwindcss": "^1.8.12",
    "vue": "3.0.0",
    "vue-template-compiler": "^2.6.12"
  },
  "devDependencies": {
    "@babel/preset-typescript": "^7.12.1",
    "@types/jest": "^24.0.19",
    "@types/koa-router": "^7.4.1",
    "@typescript-eslint/eslint-plugin": "^4.4.0",
    "@typescript-eslint/parser": "^4.4.0",
    "@vue/babel-preset-app": "^4.5.8",
    "@vue/eslint-config-standard": "^5.1.2",
    "@vue/eslint-config-typescript": "^7.0.0",
    "@vue/test-utils": "2.0.0-beta.5",
    "aria-mocha": "0.5.2",
    "aria-vue": "0.4.1",
    "autoprefixer": "^6.7.7",
    "babel-core": "^7.0.0-bridge.0",
    "compodoc": "0.0.41",
    "eslint": "^7.10.0",
    "eslint-plugin-vue": "^7.0.1",
    "faker": "^5.1.0",
    "husky": "^4.3.0",
    "jest": "^26.6.1",
    "jest-cli": "^26.6.1",
    "jest-config": "^26.6.1",
    "jest-css-modules": "^2.1.0",
    "jest-each": "^26.6.1",
    "jest-puppeteer": "^4.4.0",
    "jest-serializer-vue": "^2.0.2",
    "jest-vue-preprocessor": "^1.7.1",
    "json-server": "^0.16.2",
    "lint-staged": "^10.5.0",
    "prettier": "^2.1.2",
    "puppeteer": "^5.4.1",
    "ts-jest": "^26.4.3",
    "ts-node": "^9.0.0",
    "typescript": "^4.0.3",
    "vite": "^1.0.0-rc.4",
    "vue-gue": "^0.2.0",
    "vue-jest": "^3.0.7",
    "wait-on": "^5.2.0"
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "src/**/*.{js,jsx,ts,tsx,md,html,css,scss}": [
      "prettier --write",
      "git add"
    ],
    "*.js": [
      "prettier --write"
    ]
  }
}
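For context, vue-template-compiler only compiles Vue 2 templates; with Vue 3 the SFC compiler is @vue/compiler-sfc (already in the dependencies above), and Jest needs a transform that understands it. A sketch of the relevant Jest configuration, assuming a Vue 3-capable build of vue-jest (published as vue-jest@next around this time; the exact package and version are an assumption), with vue-template-compiler removed:

```json
{
  "jest": {
    "moduleFileExtensions": ["js", "json", "vue"],
    "transform": {
      "^.+\\.vue$": "vue-jest",
      "^.+\\.js$": "babel-jest"
    }
  }
}
```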

Ruby on Rails Testing - ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "xxx_xxxx_xxxx_pkey"

I have the following Models. I am using Rails 6.0.3.2 (Ruby 2.7.1p83) and using Minitest as Testing Engine

class AuthContentType < ApplicationRecord
  has_many :auth_permissions
end
class AuthGroup < ApplicationRecord
  has_many :auth_group_permissions
  has_many :auth_permissions, through: :auth_group_permissions
end
class AuthGroupPermission < ApplicationRecord
  belongs_to :auth_group
  belongs_to :auth_permission
end
class AuthPermission < ApplicationRecord
  belongs_to :auth_content_type
  has_many :auth_user_permissions
  has_many :users, through: :auth_user_permissions
  has_many :auth_group_permissions
  has_many :auth_groups, through: :auth_group_permissions
end

When I run > rails test test\controllers\XXXX_controller.rb -n test_should_get_home I get the following Error:

Run options: -b -n test_should_get_home --seed 27312                                                                                                                                         
                                                                                                                                                                                             
# Running:                                                                                                                                                                                   
                                                                                                                                                                                             
E                                                                                                                                                                                            
                                                                                                                                                                                             
Error:                                                                                                                                                                                       
GdControllerTest#test_should_get_home:                                                                                                                                                       
ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint "auth_content_types_pkey"                                                         
DETAIL:  Key (id)=(980190962) already exists.                                                                                                                                                
                                                                                                                                                                                             
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/database_statements.rb:92:in `exec'                                         
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/database_statements.rb:92:in `block (2 levels) in execute'                  
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'                                                          
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'                                                
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/database_statements.rb:91:in `block in execute'                             
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:722:in `block (2 levels) in log'                                   
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize'                      
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt'                                     
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize'                                 
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt'                                     
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize'                                          
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:721:in `block in log'                                              
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/notifications/instrumenter.rb:24:in `instrument'                                                         
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:712:in `log'                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/database_statements.rb:90:in `execute'                                      
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/database_statements.rb:170:in `execute_batch'                               
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:370:in `block (3 levels) in insert_fixtures_set'       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction'                          
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction'                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize'                      
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt'                                     
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize'                                 
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt'                                     
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize'                                          
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction'                                
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction'                                   
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:369:in `block (2 levels) in insert_fixtures_set'       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/postgresql/referential_integrity.rb:19:in `disable_referential_integrity'              
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:368:in `block in insert_fixtures_set'                  
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:480:in `with_multi_statements'                         
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:367:in `insert_fixtures_set'                           
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/fixtures.rb:608:in `block in insert'                                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/fixtures.rb:599:in `each'                                                                                  
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/fixtures.rb:599:in `insert'                                                                                
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/fixtures.rb:585:in `read_and_insert'                                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/fixtures.rb:545:in `create_fixtures'                                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/test_fixtures.rb:205:in `load_fixtures'                                                                    
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/test_fixtures.rb:118:in `setup_fixtures'                                                                   
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activerecord-6.0.3.2/lib/active_record/test_fixtures.rb:8:in `before_setup'                                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activesupport-6.0.3.2/lib/active_support/testing/setup_and_teardown.rb:40:in `before_setup'                                                       
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/actionpack-6.0.3.2/lib/action_dispatch/testing/integration.rb:322:in `before_setup'                                                               
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/activejob-6.0.3.2/lib/active_job/test_helper.rb:45:in `before_setup'                                                                              
    C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/railties-6.0.3.2/lib/rails/test_help.rb:48:in `before_setup'                                                                                      
                                                                                                                                                                                             
                                                                                                                                                                                             
rails test test/controllers/gd_controller_test.rb:4                                                                                                                                          
                                                                                                                                                                                             
.                                                                                                                                                                                            
                                                                                                                                                                                             
Finished in 2.872201s, 0.6963 runs/s, 0.3482 assertions/s.                                                                                                                                   
2 runs, 1 assertions, 0 failures, 1 errors, 0 skips 

Could someone help me with this? I have been trying some of the suggested solutions, but they don't help.

Solution 1 (ran this in the console):

    ActiveRecord::Base.connection.tables.each do |t|
      ActiveRecord::Base.connection.reset_pk_sequence!(t)
    end

C# unit testing csv

I need to test the quantity of headers in a CSV file. Below is my approach to the problem: I created two methods, the first counts the headers in the CSV file and the second counts the number of commas in the first row. If they are equal, the test is positive; if not, negative. I use Shouldly and it works fine, but I need to find a different approach to this using Fixie. Below is the code:

using System;
using System.Collections.Generic;
using System.Text;
using LumenWorks.Framework.IO.Csv;
using System.IO;
using System.Linq;

namespace Test
{
    public class CsvParser
    {
        public int CountCsvHeaders()
        {
            using (var reader = new System.IO.StreamReader(@"csv file", Encoding.Default))
            {
                Char quotingCharacter = '\0'; // no quoting-character;
                Char escapeCharacter = quotingCharacter;
                Char delimiter = ',';
                int count = 0;
                using (var csv = new CsvReader(reader, true, delimiter, quotingCharacter, escapeCharacter, '\0', ValueTrimmingOptions.All))
                {
                    
                    string[] headers = csv.GetFieldHeaders();

                    foreach (string head in headers)
                    {
                        count = count + 1;
                    }
                }
                return count;
            }
        }
        public int Count()
        {
            string line = File.ReadLines(@"csv file").First();
            int count = line.Split(',').Length;
            return count;
        }
       
    }
}

using System;
using System.Collections.Generic;
using System.Text;
using Shouldly;
using Fixie;
using NFluent;


namespace Test
{
    [TestClass]
    public class SimpleTests
    {
        
        [Test]
        public void should_csv_columns_quantity()
        {
            var parser = new CsvParser();
            int ColumnsQuantity = parser.CountCsvHeaders();
            int Count = parser.Count();
            ColumnsQuantity.ShouldBe(Count);
        }
       
        
    }
}

Test with same action but different contexts

I'm working at a company as a QA intern; we do end-to-end testing with browser automation.

We have lots of test scenarios that look like this (we use Cucumber):

Given I am logged in
When I go to my address book
Then I can see my contacts

but recently we had a few bugs that depended on what kind of account we were logged in with, so in order to test these edge cases I'm thinking of doing something like this:

Given I am logged in as a project manager
When I go to my address book
Then I can see my contacts

And do that for every kind of account (project manager, commercial, etc.).

But I'm wondering about the value of doing that. For every user the outcome should be the same: they should all see their contacts (even if, due to a bug, that was not the case). If I start doing it that way, it would be just as legitimate to have tests like

Given I am logged in with a german account 

and same with french, english etc...

but it would lead to an explosion in the number of tests.

So do you test these edge cases?
If so, how do you do it efficiently?
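For what it's worth, one way to cover each account type without copy-pasting the whole scenario (a sketch; the step wording and roles are illustrative) is Cucumber's Scenario Outline, which re-runs the same steps once per Examples row:

```gherkin
Scenario Outline: Contacts are visible for every account type
  Given I am logged in as a <role>
  When I go to my address book
  Then I can see my contacts

  Examples:
    | role            |
    | project manager |
    | commercial      |
```

This keeps one copy of the steps while still exercising each edge case.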

Cmd.exe exited with code '5' on Azure DevOps when multiple python test scripts run in one go

I'm trying to execute multiple Python scripts within an Azure DevOps pipeline, but "##[error]Cmd.exe exited with code '5'." is returned, with apparently no test case identified.

In the "execute script" step of the pipeline, the following command has been written:

 "C:\Program Files\Python\Python38\python.exe"  -m pytest $(System.DefaultWorkingDirectory)\autotest_main.py -v --junitxml=test-results.xml --html=report.html --self-contained-html

and this works fine as long as the "autotest_main.py" script contains just the suite with the test cases. But I need to organize a lot of test cases, so the starting point is the autotest_main.py script, which is written as follows:

import os

os.system('test_Preconditions.py')
os.system('test_Suite1.py')
os.system('test_Suite2.py')
os.system('test_Suite3.py')
os.system('test_Suite4.py')

I also used the test_ naming convention to make the test cases easy to identify. Each test_Suite*.py file then contains the unit tests, used in conjunction with Selenium WebDriver to test the front end of a web application. When I run the script on the local machine everything works fine, but when I run it via Azure DevOps, "##[error]Cmd.exe exited with code '5'." is returned. Here is the log:

##[section]Starting: execute script
==============================================================================
Task         : Command line
Description  : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
Version      : 2.176.1
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
==============================================================================
Generating script.
Script contents:
"C:\Program Files\Python\Python38\python.exe"  -m pytest C:\agent\_work\12\s\autotest_main.py -v --junitxml=test-results.xml --html=report.html --self-contained-html
========================== Starting Command Output ===========================
##[command]"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "C:\agent\_work\_temp\b653526f-53a0-4ac7-8a22-50698ed4ba58.cmd""
============================= test session starts =============================
platform win32 -- Python 3.8.3, pytest-6.1.0, py-1.9.0, pluggy-0.13.1 -- C:\Program Files\Python\Python38\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.8.3', 'Platform': 'Windows-10-10.0.14393-SP0', 'Packages': {'pytest': '6.1.0', 'py': '1.9.0', 'pluggy': '0.13.1'}, 'Plugins': {'html': '2.1.1', 'metadata': '1.10.0', 'nunit': '0.6.0'}}
rootdir: C:\agent\_work\12\s
plugins: html-2.1.1, metadata-1.10.0, nunit-0.6.0
collecting ... collected 0 items
 
---------- generated xml file: C:\agent\_work\12\s\test-results.xml -----------
--------- generated html file: file://C:\agent\_work\12\s\report.html ---------
============================ no tests ran in 0.32s ============================
##[error]Cmd.exe exited with code '5'.
##[section]Finishing: execute script

Any suggestions to fix this problem? If possible, I want to avoid writing a separate script step for each suite in the Azure pipeline.
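A note on the failure mode, plus a sketch (paths and the helper name are illustrative, not from the pipeline): pytest's exit code 5 means "no tests were collected", which matches the "collected 0 items / no tests ran" lines in the log. The os.system calls spawn separate processes that pytest itself never sees as test cases. One hedged alternative is to point pytest at a folder, or an explicit list of files, so it collects every suite in a single run:

```python
import pytest

def run_suites(paths):
    """Run pytest over the given files/folders and return its exit code.

    Passing folders or several test_*.py paths lets pytest collect all the
    test cases itself, instead of launching them via os.system, where pytest
    collects nothing and exits with code 5 ("no tests collected").
    """
    return int(pytest.main(list(paths) + ["-v", "--junitxml=test-results.xml"]))
```

The pipeline step could then invoke pytest once with all the suite files (e.g. `python -m pytest test_Suite1.py test_Suite2.py`) instead of going through an os.system wrapper.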

PDF Validation - Manual Testing

I am a manual tester. I have to test some reports in PDF format (verify the header, footer, font style, and color). Can you suggest how to identify these elements in a PDF file and validate them?

Unit Test (Fixie) for csv parser

I need to test the quantity of headers in a CSV file. I am able to count the headers, but I also need to test what happens when header names are duplicated. Here is what I have so far:

public static int CountCsvHeaders()
        {
            using (var reader = new System.IO.StreamReader(@"csv file", Encoding.Default))
            {
                Char quotingCharacter = '\0'; // no quoting-character;
                Char escapeCharacter = quotingCharacter;
                Char delimiter = ',';
                int count = 0;
                using (var csv = new CsvReader(reader, true, delimiter, quotingCharacter, escapeCharacter, '\0', ValueTrimmingOptions.All))
                {
                    string[] headers = csv.GetFieldHeaders();

                    foreach (string head in headers)
                    {
                        count = count + 1;
                    }
                }
                return count;
            }
        }

Instrumented tests stuck on "Instantiating tests..." after androidx.test library update

I noticed that after I updated my test libraries from:

androidx.test:runner:1.2.0
androidx.test:rules:1.2.0

to

androidx.test:runner:1.3.0
androidx.test:rules:1.3.0

my instrumented tests get stuck on "Instantiating tests..." and eventually fail. Everything works fine when I use 1.2.0.

So, is this a bug in a library version or am I missing something with this update?

VCR gem - Can I store my response data in a separate JSON file?

Using the VCR gem, responses are saved as a large string inside the YAML cassette file, like this:

 response:
    body:
      string: '{"data":{"salesforceObjects":{"records":[{"student":{"accountId" ...

However, is it possible to save this JSON in a separate file, which is properly formatted and easier to read?

ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure

Jenkins runs in a Linux VM without a GUI. Versions: testcafe@1.9.4, testcafe-reporter-xunit@2.1.0, chromium@3.0.2, node -v v14.2.0.

Execute shell:

npm install chromium
npm install firefox
npm install testcafe testcafe-reporter-xunit

node_modules/.bin/testcafe "firefox:node_modules/firefox:headless" tests/smokeTest.js -r xunit:res.xml

node_modules/.bin/testcafe "chromium:node_modules/chromium/lib/chromium/chrome-linux/chrome --headless --no-sandbox" tests/smokeTest.js -r xunit:res.xml

The Jenkins job fails with: ERROR Unable to establish one or more of the specified browser connections. This can be caused by network issues or remote device failure.

Type "testcafe -h" for help. Build step 'Execute shell' marked build as failure. Recording test results. Finished: FAILURE

How to test firebase account deletion in the context of a React App

I'm currently working on a React/Redux/Firebase App. I'm using Firebase for user authentication. I would like write a couple of tests to check if account deletion works consistently, but I'm not sure how to tackle it:

  • tried using Cypress for that, but the gmail oauth login popup creates a new window, and Cypress cannot target it
  • tried using Cypress with email/password authentication, but that requires redirecting to an email provider and confirming the account, which makes the whole test a bit flaky
  • was thinking about using something like Jest, but I'm not sure it will catch all the side effects (apart from deleting the account from Firebase, the app also talks to a custom backend and, in this case, deletes entries from another DB)

Any suggestions?

How to test an API that makes use of a Django view function

I currently have a Django API mixed with some gRPC stuff. The API makes use of a Django view function


@require_GET
def django_view_fn(request):
  pass

within the api

class API:
  
  def api_funct(self, request):
     result = django_view_fn(request)
     return result.pb
     

Everything works well as-is during normal operation, e.g. via Postman. The only problem I have is that the function doesn't work during a Django test/TestCase: it returns a 405 response instead of the expected response from the view function. (@require_GET makes the view answer 405 Method Not Allowed to any request whose method isn't GET.)

It only works if I comment out the @require_GET decorator on the view function in the view file.

Wednesday, October 28, 2020

Laravel Testing - How to run command with RefreshDatabase with seed

In Laravel, I need to run db:seed after RefreshDatabase.

I wrote this code; all tables are deleted and then my migrations run successfully.

My question is: how do I run my command after seeding?

class ExampleTest extends TestCase
{

    use RefreshDatabase;

    
    public function testDbSeed()
    {
        Artisan::call('db:seed');
        $resultAsText = Artisan::output();
        $this->assertTrue(true);
    }

Then I need to run my custom command:

php artisan permission:sync

    public function testPermissionSync()
    {
        Artisan::call('permission:sync');
        $resultAsText = Artisan::output();
        $this->assertTrue(true);
    }

If this command has run, my main page opens, and the test below should pass:

    /**
     * A basic test example.
     *
     * @return void
     */
    public function testBasicTest()
    {

        $response = $this->get('/');

        $response->assertStatus(200);
    }

But this test does not pass; the assertion gets a 403.

Django RuntimeError when using models for unittests

I'm using multiple models for running tests like this one on a Django 2.2 project:

class Person(models.Model):
    name = models.CharField(max_length=200)

Running a simple test:

class TestRegistration(SimpleTestCase):

    def setUp(self):
        self.site = admin.AdminSite()

    def test_bare_registration(self):
        self.site.register(Person)
        self.assertIsInstance(self.site._registry[Person], admin.ModelAdmin)

But I always get the same kind of RuntimeError, with this model as with others (the model named in the error changes but the RuntimeError persists):

    "INSTALLED_APPS." % (module, name)
    RuntimeError: Model class tests.models.Person doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.

I've already tried:

Here is the complete error message:

Traceback (most recent call last):
  File "manage.py", line 15, in <module>
    execute_from_command_line(sys.argv)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/commands/test.py", line 23, in run_from_argv
    super().run_from_argv(argv)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/core/management/commands/test.py", line 53, in handle
    failures = test_runner.run_tests(test_labels)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/test/runner.py", line 627, in run_tests
    suite = self.build_suite(test_labels, extra_tests)
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/test/runner.py", line 488, in build_suite
    tests = self.test_loader.loadTestsFromName(label)
  File "/usr/lib/python3.6/unittest/loader.py", line 153, in loadTestsFromName
    module = __import__(module_name)
  File "/home/codea/Workspace/ecoclimasol/climfarm/tests/__init__.py", line 1, in <module>
    from tests.models import *
  File "/home/codea/Workspace/ecoclimasol/climfarm/tests/models.py", line 15, in <module>
    class Person(models.Model):
  File "/home/codea/Workspace/venv3.6/lib/python3.6/site-packages/django/db/models/base.py", line 111, in __new__
    "INSTALLED_APPS." % (module, name)
RuntimeError: Model class tests.models.Person doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.

I can't figure out how to PHPUnit test Ratchet/Sockets

Let me first confess that I am an absolute novice in the field of TDD and unit tests. Currently, I am learning TDD to advance my career.

I am reprogramming a program I wrote a year ago, this time using TDD. The program is not that complex: it listens to a socket (Ratchet), receives information, parses it, and sends messages through a message bus (RabbitMQ).

Since I am approaching this via TDD, I am starting by writing my tests first.

My listener class has 4 methods.

  • Connect (connects to the stream via Ratchet)
  • Listen (starts listening)
  • Parse (parses received information)
  • SendMessage (send a message to the RabbitMQ bus)

The problems I encounter while writing the tests first:

  • How can I mock a Ratchet connection and assert that my function can "connect"
  • How can I assert that the information received matches my expectations

I find it really hard to find information on this specific subject, and I understand that my question is broad. I do not expect a complete solution or answer, but hopefully the terminology to investigate this subject further, and lectures/tutorials about writing tests for methods of this kind.

How To Unit Test a Popover In React Native

I'm trying to test popover functionality; however, I'm running into "cannot spy property". The popover I'm trying to test is in the class UserIcon.js, which displays a popover when pressed. I haven't seen any documentation or similar questions, which is why I'm posting. Here's what I'm trying to do:

Test.js

test('Popup Render When User Icon Button is Clicked', () => {
  jest.spyOn(App, 'headerRight');
  const {getByTestId} = render(<App/>);
  fireEvent.press(getByTestId('usericon'));
  expect(App.headerRight).toHaveBeenCalledTimes(1);
});

App.js

<Stack.Navigator
                  testID = "usericon"
                initialRouteName={SPLASH_SCREEN}
                headerMode={'screen'}
                screenOptions={({navigation}) => ({
                  headerStyle: {
                    borderBottomLeftRadius: 20,
                    backgroundColor: 'white',
                    borderBottomRightRadius: 20,
                    shadowRadius: 7,
                  },
                  headerBackTitleVisible: false,
                  headerRight: () => <UserIcon  navigation={navigation} />,
                })}>

UserIcon.js

export default UserIcon = (props) => {
  const [{user, data}, dispatch] = useStateValue();
  const navigation = props.navigation;
  logoutBtnPress = () => {
    userLogout(dispatch);
    navigation.reset({index: 0, routes: [{name: LOGIN_SCREEN}]});
  };

  return (
    <PopoverController>
      {({
        openPopover,
        closePopover,
        popoverVisible,
        setPopoverAnchor,
        popoverAnchorRect,
      }) => (
        <View>
          <TouchableOpacity
            ref={setPopoverAnchor}
            onPress={() => openPopover()}>
            <IconButton icon="account-box-outline" color={'gray'} />
          </TouchableOpacity>
          <Popover
            visible={popoverVisible}
            onClose={closePopover}>
            <DataTable>
              <DataTable.Row>
                <DataTable.Cell
                  onPress={() => {
                    closePopover();
                    navigation.navigate(HISTORY_SCREEN);
                  }}>
                  <Button icon="history">
                    <Text>History</Text>
                  </Button>
                </DataTable.Cell>
              </DataTable.Row>
              <DataTable.Row>
                <DataTable.Cell
                  onPress={() => {
                    Alert.alert(
                      'Log out?',
                      'Are you really sure to log out?',
                      [
                        {
                          text: 'Cancel',
                          onPress: () => closePopover(),
                          style: 'cancel',
                        },
                        {
                          text: 'OK',
                          onPress: () => {
                            closePopover();
                            logoutBtnPress();
                          },
                        },
                      ],
                      {cancelable: false},
                    );
                  }}>
                  <Button icon="logout">
                    <Text>Log Out</Text>
                  </Button>
                </DataTable.Cell>
              </DataTable.Row>
            </DataTable>
          </Popover>
        </View>
      )}
    </PopoverController>
  );
};

possible way to run actual application

I am writing test scenarios using the Ogurets framework, which is a Cucumber + Gherkin combination. The tests are for a Flutter application written in Dart.

I recently figured out that the test driver does not execute the actual app. For example, in my case:

I have an app with a Login feature. After navigating to Logout and clicking on it, the app does not return to the login screen (it does the back-end work, though); but when I run the actual app through the main.dart file, without the Ogurets configuration, everything works as expected.

So I was wondering: is there any way to execute the actual app during a test scenario? Say it could execute the release version of the app.

Not sure if that makes sense.

Thanks for any tips.

react testing library (RTL): test clearing of search input field on submit (after fetching)

Been staring too long at this 😳

I want to test that a search box calls a handler (passed as a prop) with the fetched results and resets the input field afterwards.

import React, { useState } from 'react'
import Axios from 'axios'

import './style.css'

function SearchBox({ setPhotos }) {
  const [searchTerm, setSearchTerm] = useState('')

  const handleTyping = (event) => {
    event.preventDefault()
    setSearchTerm(event.currentTarget.value)
  }

  const handleSubmit = async (event) => {
    event.preventDefault()

    try {
      const restURL = `https://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=${
        process.env.REACT_APP_API_KEY
      }&per_page=10&format=json&nojsoncallback=1'&text=${encodeURIComponent(
        searchTerm
      )}`
      const { data } = await Axios.get(restURL)
      const fetchedPhotos = data.photos.photo
      setPhotos(fetchedPhotos)
      setSearchTerm('') // 👈 This is giving trouble
    } catch (error) {
      if (!Axios.isCancel(error)) {
        throw error
      }
    }
  }

  return (
    <section>
      <form action="none">
        <input
          aria-label="Search Flickr"
          placeholder="Search Flickr"
          value={searchTerm}
          onChange={handleTyping}
        />
        <button type="submit" aria-label="Submit search" onClick={handleSubmit}>
          <span aria-label="search icon" role="img">
            🔍
          </span>
        </button>
      </form>
    </section>
  )
}

export default SearchBox
import React from 'react'
import { render, fireEvent, waitFor, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { rest } from 'msw'
import { setupServer } from 'msw/node'

import SearchBox from '.'
import { act } from 'react-dom/test-utils'

const fakeServer = setupServer(
  rest.get(
    'https://api.flickr.com/services/rest/?method=flickr.photos.search',
    (req, res, ctx) =>
      res(ctx.status(200), ctx.json({ photos: { photo: [1, 2, 3] } }))
  )
)

beforeAll(() => fakeServer.listen())
afterEach(() => fakeServer.resetHandlers())
afterAll(() => fakeServer.close())

...

test('it calls Flickr REST request when submitting search term', async () => {
  const fakeSetPhotos = jest.fn(() => {})
  const { getByRole } = render(<SearchBox setPhotos={fakeSetPhotos} />)

  const inputField = getByRole('textbox', { name: /search flickr/i })
  const submitButton = getByRole('button', { name: /submit search/i })

  userEvent.type(inputField, 'Finding Walley')
  fireEvent.click(submitButton)

  waitFor(() => {
    expect(fakeSetPhotos).toHaveBeenCalledWith([1, 2, 3])
    waitFor(() => {
      expect(inputField.value).toBe('')
    })
  })
})

This is the error I get:

Watch Usage: Press w to show more.
  ●  Cannot log after tests are done. Did you forget to wait for something async in your test?
    Attempted to log "Warning: Can't perform a React state update on an unmounted component. This is a no-op, but it indicates a memory leak in your application. To fix, cancel all subscriptions and asynchronous tasks in a useEffect cleanup function.
        in SearchBox (at SearchBox/index.test.js:38)".

      38 |           aria-label="Search Flickr"
      39 |           placeholder="Search Flickr"
    > 40 |           value={searchTerm}
         |       ^
      41 |           onChange={handleTyping}
      42 |         />
      43 |         <button type="submit" aria-label="Submit search" onClick={handleSubmit}>

Any help will be appreciated. Thanks!

I get error java.util.UnknownFormatConversionException: Conversion = '!'

I have a string

String pat = "HelloWorld#$%!";

@FindBy(xpath = "//*[contains(text(), 'Best Choice')]")
WebElement button;

final String format = String.format("%s", pat);
final String format = String.format("%s=!", pat);
button.getText().contains(format); 

When I try to compare my button text, it fails with java.util.UnknownFormatConversionException: Conversion = '!'. (That exception is thrown when a format string contains % followed by an unrecognized conversion character; since pat ends in "%!", passing pat itself as the format string, rather than as an argument, would trigger it.)

No material widget found in test

I am getting the No material widget found error when I run the following test, even though I am wrapping it in a MaterialApp. Any pointers to why that is?

Detailed error log:

The following assertion was thrown building _InkResponseStateWidget(gestures: [tap], mouseCursor: null, BoxShape.circle, dirty, state: _InkResponseState#3ef96):

No Material widget found. _InkResponseStateWidget widgets require a Material widget ancestor. In material design, most widgets are conceptually "printed" on a sheet of material. In Flutter's material library, that material is represented by the Material widget. It is the Material widget that renders ink splashes, for instance. Because of this, many material library widgets require that there be a Material widget in the tree above them. To introduce a Material widget, you can either directly include one, or use a widget that contains Material itself, such as a Card, Dialog, Drawer, or Scaffold.

The specific widget that could not find a Material ancestor was: _InkResponseStateWidget

The ancestors of this widget were: ... InkResponse Positioned Stack GridTile Consumer<ExamInformation> ...

arrays for test and train set for CNN using python

I have a set of 4 images that I convert to arrays to train a CNN.

I found this method, which says:

X_train, X_test, y_train, y_test = train_test_split(
...     X, y, test_size=0.33, random_state=42)
...

knowing that X_train is usually a full image (array) and y_train is a label that tells us how the image is classified,

I extracted 4 label arrays to satisfy the X_train and y_train conditions.

Now I'm stuck on how to put them together. I mean, X_train is just one value, yet I have 4 arrays; same for y_train (for the test set I also have 2 arrays, for X_test and y_test).

Can anyone propose a solution?
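A sketch of one way to arrange this (placeholder shapes and labels, not the asker's data): stack the four image arrays along a new first axis so X has shape (n_samples, H, W, C), and put the four labels into a 1-D y. train_test_split then splits along that first axis, handing whole images to each side; the manual split below does the same shuffle-and-slice:

```python
import numpy as np

# Four placeholder 8x8 RGB images and their class labels (illustrative values).
imgs = [np.zeros((8, 8, 3), dtype=np.float32) for _ in range(4)]
labels = [0, 1, 0, 1]

X = np.stack(imgs)    # shape (4, 8, 8, 3): one row per image
y = np.array(labels)  # shape (4,): one label per image

# Shuffle indices, then slice along axis 0 -- the same thing
# sklearn's train_test_split(X, y, test_size=0.5) does internally.
idx = np.random.RandomState(42).permutation(len(X))
X_train, X_test = X[idx[:2]], X[idx[2:]]
y_train, y_test = y[idx[:2]], y[idx[2:]]
```

With X and y stacked like this, `train_test_split(X, y, test_size=0.5, random_state=42)` can be called directly on them.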

Running all project tests at once, but also being able to run them in isolation (while having to provide input data paths)

I have the following folder structure for my tests:

    python-tests
        tests-folder-1
            input-data
                <files_with_input_data_for_tests_from_this_folder>
            test_1.py
            test_2.py
        tests-folder-2
            input-data
                <files_with_input_data_for_tests_from_this_folder>
            test_something_else_1.py
            test_something_else_2.py
        run.py

I want to be able to discover and run all tests at once, but also to run them in isolation. The problem is that the tests use external data (saved in the input-data folders), and running through run.py gives a different "current working directory" than running a test in isolation. As a result, in one of these scenarios the tests fail because they cannot load the input data.

What would be the best way to solve this problem, so the tests can run both all at once and in isolation?

The run.py python script looks as follows:

if __name__ == '__main__':
    suite = unittest.TestLoader().discover('.', pattern="test_*.py")
    unittest.TextTestRunner(verbosity=2).run(suite)

I would like to avoid modifying sys.path if possible.
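One common fix (a sketch; the class and data-file names are hypothetical) is to resolve input paths relative to the test module itself via __file__ instead of the current working directory, so a test in tests-folder-1 finds its input-data folder regardless of where the run was launched from:

```python
import unittest
from pathlib import Path

# __file__-relative path: identical whether this module is discovered by
# run.py from python-tests/ or executed directly inside tests-folder-1/.
DATA_DIR = Path(__file__).resolve().parent / "input-data"

class InputDataTest(unittest.TestCase):
    def test_input_path_is_stable(self):
        sample = DATA_DIR / "sample.txt"  # hypothetical input file
        # The path is absolute and anchored at the test module's folder.
        self.assertTrue(sample.is_absolute())
        self.assertEqual(sample.parent.name, "input-data")
```

This also avoids touching sys.path, since nothing depends on the working directory any more.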

Thrift library for Robot Framework


Is there a Thrift library for performing tests with Robot Framework?
For example to simulate a thrift client doing request towards a thrift server.

Universal Agent is not displayed in the Agent type dropdown in the qTest automation agent

I'm trying to integrate an automation script written in Protractor with the test management tool qTest. The integration requires an agent type, and the agent type should be Universal Agent, but the dropdown doesn't show Universal Agent as an option. What did I miss? I have attached a GIF.

Redis / Homestead / Laravel 7 - Connection Refused

I am trying to get Redis working while following the "Let's Build a Forum with TDD" series. I have got to episode 66, which introduces Redis, and written the first test... and then boom, it blows up. I have done a lot of googling, but the answers do not seem to correlate.

I am using Homestead, and thus Redis should be installed in the environment (using a Vagrant box on Windows).

I have installed Predis and confirmed it is being pulled in via the vendor library and Composer package.

I have SSH'd into the Homestead box, run redis-cli, and performed a ping/pong test to confirm that the Redis server is in fact running.

I have also checked the version with redis-server --version:

Redis server v=5.0.8 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=129cf1a0751f12a

Following the tutorial, I have written the first test:

public function test_it_increments_a_threads_score_each_time_it_is_read()
    {

        $this->assertEmpty( Redis::zrevrange('trending_threads', 0, -1));

        $thread = create('App\Thread');

        $this->call('GET', $thread->path());

        Redis::zrevrange('trending_threads', 0, -1);

        $this->assertCount(1,  Redis::zrevrange('trending_threads', 0, -1));


    }

and all I get is:

Predis\Connection\ConnectionException : No connection could be made because the target machine actively refused it. [tcp://127.0.0.1:6379]

I am really struggling to work out why this message persists when I have followed everything line by line. I am not using XAMPP or anything other than Homestead, yet I still get this error.

The config is the standard one, updated to use the Predis library:

    'redis' => [

        'client' => env('REDIS_CLIENT', 'predis'),

        'options' => [
            'cluster' => env('REDIS_CLUSTER', 'redis'),
            'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
        ],

        'default' => [
            'url' => env('REDIS_URL'),
            'host' => env('REDIS_HOST', 'localhost'),
            'password' => env('REDIS_PASSWORD', null),
            'port' => env('REDIS_PORT', '6379'),
            'database' => env('REDIS_DB', '0'),
        ],

        'cache' => [
            'url' => env('REDIS_URL'),
            'host' => env('REDIS_HOST', '127.0.0.1'),
            'password' => env('REDIS_PASSWORD', null),
            'port' => env('REDIS_PORT', '6379'),
            'database' => env('REDIS_CACHE_DB', '1'),
        ],

    ],

My Homestead.yaml:

---
ip: "10.100.110.10"
memory: 2048
cpus: 2
provider: virtualbox

authorize: ~/.ssh/id_rsa.pub

keys:
    - ~/.ssh/id_rsa

folders:
    - map: ~/code
      to: /home/vagrant/code

sites:

    - map: tweety.test.com
      to: /home/vagrant/code/tweety/public

    - map: bird.test.com
      to: /home/vagrant/code/birdboard/public

    - map: ecosystem.test.com
      to: /home/vagrant/code/ecosystem/public

    - map: multiform.test.com
      to: /home/vagrant/code/multi_upload/public


databases:
    - tweety
    - birdboard
    - ecosystem
    - multi_upload

features:
    - mariadb: false
    - ohmyzsh: false
    - webdriver: false

# ports:
#     - send: 50000
#       to: 5000
#     - send: 7777
#       to: 777
#       protocol: udp

Has anybody come across this before, or have any idea of a direction? It is driving me bonkers and is quite a big blocker in the series.

For reference - I am running Laravel 7 on Homestead via a Vagrant box on my Windows laptop. :)

Cheers all