Thursday, December 31, 2020

IncompatibleClassChangeError Exception

I'm using OpenPojo to validate the getter and setter methods of a class, but while trying to validate them I'm getting the following exception:

java.lang.IncompatibleClassChangeError: class com.openpojo.reflection.java.bytecode.asm.SubClassCreator has interface org.objectweb.asm.ClassVisitor as super class

Here is my code:

public final class ClientToken implements RememberMeAuthenticationToken {

private static final long serialVersionUID = 3141878022445836151L;

private final String clientName;

private final Credentials credentials;

private String userId;

private boolean isRememberMe;

public ClientToken(final String clientName, final Credentials credentials) {
    this.clientName = clientName;
    this.credentials = credentials;
}

public void setUserId(final String userId) {
    this.userId = userId;
}

public void setRememberMe(boolean isRememberMe) {
    this.isRememberMe = isRememberMe;
}

public String getClientName() {
    return this.clientName;
}

public Object getCredentials() {
    return this.credentials;
}

public Object getPrincipal() {
    return this.userId;
}

@Override
public String toString() {
    return CommonHelper.toString(ClientToken.class, "clientName", this.clientName, "credentials", this.credentials,
            "userId", this.userId);
}

public boolean isRememberMe() {
    return isRememberMe;
}
}

Can anybody help me understand why this problem occurs?
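
For reference, a typical OpenPojo getter/setter validation test looks roughly like the sketch below (the rule and tester choices are assumptions, not the original poster's code). The error message itself usually points to an ASM version clash on the classpath: ClassVisitor is an interface in ASM 3.x but an abstract class in ASM 4+, so an older asm jar pulled in by another dependency can trigger exactly this message.

import com.openpojo.reflection.PojoClass;
import com.openpojo.reflection.impl.PojoClassFactory;
import com.openpojo.validation.Validator;
import com.openpojo.validation.ValidatorBuilder;
import com.openpojo.validation.test.impl.GetterTester;
import com.openpojo.validation.test.impl.SetterTester;
import org.junit.Test;

public class ClientTokenPojoTest {

    @Test
    public void validateGettersAndSetters() {
        PojoClass pojoClass = PojoClassFactory.getPojoClass(ClientToken.class);
        Validator validator = ValidatorBuilder.create()
                .with(new GetterTester())   // exercises every getter
                .with(new SetterTester())   // exercises every setter
                .build();
        validator.validate(pojoClass);
    }
}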

CypressIO - How to return multiple objects from plugins file for use with Configuration API and Code Coverage?

Currently my Cypress tests use this example for multi-environment configuration, which returns a config object from the plugins file. Now I want to measure code coverage for my application; referring to this documentation, it also needs a different object to be returned from the plugins file. From what I read, we can return only one object from the plugins file. How do I achieve having both multi-environment configuration and code coverage?

Thanks!
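
One possible approach (a sketch, assuming the @cypress/code-coverage plugin and a getConfigurationByFile helper like the one in the multi-environment recipe) is to register the coverage task and then return the merged environment config from the same plugins function:

// cypress/plugins/index.js
module.exports = (on, config) => {
  // register the code-coverage task (it hooks into `on` and `config` as its docs describe)
  require('@cypress/code-coverage/task')(on, config);

  // helper taken from the multi-environment configuration example
  const environmentConfig = getConfigurationByFile(config.env.configFile);

  // merge the coverage-aware config with the environment-specific values and return once
  return { ...config, ...environmentConfig };
};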

How to test a SQLite database for an iOS app

I am currently writing an iOS app that uses a SQLite database and am trying to learn how to write tests for it. I can't figure out how to test database-related functionalities using XCTest. If this is something that can be done using XCTest, what is the approach? If not, is there a different framework for testing a SQLite database?

How do I test a connection to RDP with a user and password via PHP? [closed]

How do I test a connection to RDP with a user and password, via PHP code or a Laravel package?

How to test JSON structure in Spring Boot

I found this code to test whether a JSON response is equal to a string:

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class MockMvcExampleTests {

    @Autowired
    private MockMvc mvc;

    @Test
    public void exampleTest() throws Exception {
        this.mvc.perform(get("/")).andExpect(status().isOk())
                .andExpect(content().string("Hello World"));
    }

}

Do you know how to test the JSON structure? For example, checking that the object contains id, name, age and so on, whatever the values are. Thanks
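
MockMvc's jsonPath matchers can assert on structure without pinning the values; a minimal sketch (the /users/1 endpoint and its fields are hypothetical):

// jsonPath comes from org.springframework.test.web.servlet.result.MockMvcResultMatchers (static import)
@Test
public void jsonStructureTest() throws Exception {
    this.mvc.perform(get("/users/1"))                      // hypothetical JSON endpoint
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.id").exists())          // field must be present, value irrelevant
            .andExpect(jsonPath("$.name").exists())
            .andExpect(jsonPath("$.age").isNumber());      // optionally also assert the type
}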

How to fix the assertion error in Python test code?

These are the testing methods for prediction

def test_get_prediction(input_params, output_params):
  exec_log.info("Testing started for prediction")
  response = aiservice.get_prediction(input_params)
  assert response['predicted_label'] != None
  test_insight(response)

def test_insight(response):

  if('featureImportance' in response['Insight']):
    data_table = response['Insight']['featureImportance']['data']
    assert len(data_table) > 0
  elif(response['Insight'] == 'nearestNeighbors'):
    data_dict = response['Insight']['nearestNeighbors']
    assert len(data_dict) > 0
  elif('modelCoefficients' in response['Insight']  ):
    data_table = response['Insight']['modelCoefficients ']['data']
    assert len(data_table) > 0

When I run the tests with pytest in VS Code, it always fails with:

================================== FAILURES ===================================
_______________ test_workflow[Classification: Credit Approval] ________________

caplog = <_pytest.logging.LogCaptureFixture object at 0x0000026B1EF15EE0>
test_name = 'Classification: Credit Approval'
test_params = [{'dataImport': {'expectedOutput': {'result': 'success'}, 'input': {'file_location': 'AWS', 
'test_filename': 'credit_d...': 3, 'yAxisLabel': 'Debt', ...}, 'input': {'Age': 30, 'BankCustomer': 
'g', 'Citizen': 'g', 'CreditScore': '1', ...}}}]

  @pytest.mark.parametrize("test_name,test_params", TEST_PARAMS.items(), ids=list(TEST_PARAMS.keys()))
  def test_workflow(caplog,test_name,test_params):

    response = login()

    ai_service = create_ai_service()

    assert response == True
    assert len(ai_service) > 0
    problem_type = ''
    exec_log.info("testing started for " + test_name)
    for stage in test_params:
        for test_key, test_val in stage.items():
            if(test_key == 'dataImport'):
                test_upload_and_import_data(test_val['input'], test_val['expectedOutput'])
            elif(test_key == 'featureEngineering'):
                problem_type = test_feature_eng(test_val['input'], test_val['expectedOutput'])
            elif(test_key == 'training'):
                test_training(test_val['input'], test_val['expectedOutput'], problem_type)
            elif(test_key == "askAI"):
>               test_get_prediction(test_val['input'], test_val['expectedOutput'])

tests\test_pytest.py:87: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input_params = {'Age': '30.83', 'BankCustomer': 'g', 'Citizen': 'g', 'CreditScore': '1', ...}
output_params = {'insight': 'featureImportance'}

@allure.step
def test_get_prediction(input_params, output_params):
    exec_log.info("Testing started for prediction")
    response = aiservice.get_prediction(input_params)
>   assert response['predicted_label'] != None
E   assert None != None

tests\test_pytest.py:210: AssertionError

The prediction result is always:

{'statusCode': 400, 'body': '{"predicted_label": null, "Message": "The url you are trying to access is not accepting requests, please check the url provided by Navigator"}', 'headers': {'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*'}}

I need 'statusCode': 'Completed','body': '{"predicted_label": "+" ........

How to set up tests for a Dart HTTP server

I'm trying to build a Dart HTTP server and I want to test the API. I'm not able to set up the tests, though.

Here is what I have so far in my_server_test.dart:

import 'dart:io';

import 'package:my_server/my_server.dart';
import 'package:test/test.dart';

void main() {
  HttpServer server;
  setUp(() async {
    final server = await createServer();
    await handleRequests(server);
  });

  tearDown(() async {
    await server.close(force: true);
    server = null;
  });

  test('First try', () async {
    
    final client = HttpClient();
    final request = await client.get(InternetAddress.loopbackIPv4.host, 4040, '/');
    final response = await request.close();
    print(response);
    
  });
}

And here is the server code in my_server.dart:

import 'dart:io';

import 'package:hundetgel_server/routes/handle_get.dart';

Future<HttpServer> createServer() async {
  final address = InternetAddress.loopbackIPv4;
  const port = 4040;
  return await HttpServer.bind(address, port);
}

Future<void> handleRequests(HttpServer server) async {
  await for (HttpRequest request in server) {
    switch (request.method) {
      case 'GET':
        handleGet(request);
        break;
      default:
        handleDefault(request);
    }
  }
}

void handleGet(HttpRequest request) {
  request.response
    ..write('Hello')
    ..close();
}

void handleDefault(HttpRequest request) {
  request.response
    ..statusCode = HttpStatus.methodNotAllowed
    ..write('Unsupported request: ${request.method}.')
    ..close();
}

When I run the test I just get a timeout:

TimeoutException after 0:00:30.000000: Test timed out after 30 seconds. See https://pub.dev/packages/test#timeouts
dart:isolate  _RawReceivePortImpl._handleMessage
NoSuchMethodError: The method 'close' was called on null.
Receiver: null
Tried calling: close(force: true)
dart:core                              Object.noSuchMethod
2
main.<fn>
test/my_server_test.dart:15
===== asynchronous gap ===========================
dart:async                             _completeOnAsyncError
test/my_server_test.dart        main.<fn>
test/my_server_test.dart:1
main.<fn>
test/my_server_test.dart:14
2

✖ First try
Exited (1)

How do I set up the server so I can start testing it?
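
Two things in the setUp above would explain both messages: final server = await createServer(); declares a new local variable, so the outer server stays null (hence the NoSuchMethodError in tearDown), and await handleRequests(server) never returns because it awaits the endless request stream (hence the 30-second timeout). A minimal sketch of a corrected setUp, keeping everything else from the test file:

HttpServer server;

setUp(() async {
  server = await createServer();   // assign to the outer variable instead of shadowing it
  handleRequests(server);          // start handling requests, but don't await the endless loop
});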

Wednesday, December 30, 2020

C++ problem returning the correct pattern from given interleaved pattern

I'm trying to implement a function in C++ that takes two inputs: an array of integer values (binary values, just zero or one), and an integer value that is one of 1, 2, 4, 8.

The function returns the correct pre-mapped pattern according to this table:

multiply    Mapped value = 0    Mapped value = 1
1           0                   1
2           00                  11
4           1100                0011
8           11001100            00110011

By pre-mapped pattern I mean the pattern as it was before being mapped according to the table.

To elaborate, here are some examples (Remap is the name of the function; we always read the given input array from left to right):

Remap(given array, multiply value)

First example: Remap({0,1}, 1) returns an integer array according to the table: {0, 1}. (We see that 0 is mapped to value 0 and 1 is mapped to value 1 when multiply = 1.)

Second example: Remap({1,1,0,0}, 2) returns an integer array according to the table: {1, 0}. (We see that 11 is mapped to value 1 and 00 is mapped to value 0 when multiply = 2, so the output is {1,0}.)

Third example: Remap({1,1,0,0,0,0,1,1}, 4) returns an integer array according to the table: {0, 1}. (We see that 1100 is mapped to value 0 and 0011 is mapped to value 1 when multiply = 4, so the output is {0,1}.)

Fourth example: Remap({1,1,0,0,1,1,0,0}, 8) returns an integer array according to the table: {0}. (We see that 11001100 is mapped to value 0 when multiply = 8, so the output is {0}.)

The given array always follows the rules of the table; that is, there is no case where the patterns in the given array differ from the patterns in the table. For instance, the array {0,0,1} with multiply 2 isn't possible.

In brief, we are returning the pre-mapped pattern according to the attached mapping table (the table is given as part of the question itself). My C++ implementation is:

std::vector Remap(int* givenArray , int multiply)
{
     vector<int> ret;                                                      // result's utput
  switch (multiply)
   {   case 1
          int l[2]={0,1};                                               // for first opition in the table
          for (int i=0;i<2;i++)
              ret.puch_back({0,1});
      case 2
          int l[4]={0 ,0 ,1 ,1};                                           // for second opition in the table
          for (int i=0;i<4;i++)
              ret.puch_back({0 ,0 ,1 ,1});
      case 4
          int l[8]={0 ,0,1 ,1 ,1 ,1 ,0 ,0};                                  // for third opition in the table
          for (int i=0;i<8;i++)
              ret.puch_back({0 ,0,1 ,1 ,1 ,1 ,0 ,0});
      case 8
          int l[16]={0 ,0,1,1,1,1,0,0 ,1,1,0,0,0,0,1,1};                // for fourth opition in the table
          for (int i=0;i<16;i++)
              ret.puch_back({0 ,0,1,1,1,1,0,0 ,1,1,0,0,0,0,1,1});
   }
  return ret // returning the pre-mapped array to the function main
}

int main()
{
    int MappedPattern[4]={0,0,1,1};
    vector<int> PremappedPattern=Remap(MappedPattern , 2 ) // it should return {0,1} because 00 -> 0 , 
     //11->1 according to the table over multiply =2
    return 0;
}

What's wrong with my code? Any help with my implementation would be appreciated. Honestly, I've been stuck on this for about a week, so any assistance or alternative algorithm suggestions would be welcome!

Thanks a lot.
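
For comparison, one possible way to implement the mapping described in the question (a sketch written for illustration, not a patched version of the code above) is to store the pattern that a mapped 0 expands to for each multiply value and compare each group of multiply input bits against it:

#include <cstddef>
#include <map>
#include <vector>

std::vector<int> Remap(const std::vector<int>& given, int multiply)
{
    // Pattern that a mapped 0 expands to, for each multiply value (taken from the table)
    static const std::map<int, std::vector<int>> zeroPattern = {
        {1, {0}},
        {2, {0, 0}},
        {4, {1, 1, 0, 0}},
        {8, {1, 1, 0, 0, 1, 1, 0, 0}},
    };

    const std::vector<int>& zero = zeroPattern.at(multiply);
    std::vector<int> result;
    for (std::size_t i = 0; i + multiply <= given.size(); i += multiply) {
        std::vector<int> group(given.begin() + i, given.begin() + i + multiply);
        // the input is guaranteed valid, so anything that is not the 0-pattern is the 1-pattern
        result.push_back(group == zero ? 0 : 1);
    }
    return result;
}

int main()
{
    std::vector<int> premapped = Remap({0, 0, 1, 1}, 2);   // yields {0, 1}
    return 0;
}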

Is using an in-memory database + seeders in an integration test a good practice?

We have recently started writing more unit tests and fewer integration tests, with the understanding that unit tests allow us to detect bugs in a more focused way. Of course, this has a price, because writing unit tests means we need to create lots of mocks and spend more time.

My question concerns database and repository tests:

  1. Given that I have a repository (of todos for example) that performs simple CRUD operations in the database (almost no business logic), should I test it? And if so - what to do with the model in such a case? Am I creating a mock for it?

  2. My application contains the following classic flow: Controller -> Service -> Repository -> DB. I test this flow through an integration test where I perform an HTTP request to my controller and expect to eventually see "traffic" in the database (deletion, retrieval, creation, etc.). Is it right to do that? And if so - is it better to use an in-memory database or a database for the TEST environment? Does that mean I have to use database seeding before the actions I test?

Why does the HttpTestingController method expectOne not work with the [ngb-pagination] directive?

When I simulate a click on an element generated by the Bootstrap directive, I cannot mock the HTTP call. If I work with the generated HTML directly, everything is fine.

Here I have a simple component with two variants of the paginator. One of them is generated by the 'bootstrap' directive; the other is written without it.

@Component({
  // does not work
  // template: `
  //     <ngb-pagination 
  //       [collectionSize]="1" 
  //       [pageSize]="1" 
  //       [page]="1" 
  //       (pageChange)="nextPg($event)">
  //     </ngb-pagination>
  //   `,
  // works
    template: `
      <ul class="pagination">
        <li class="page-item disabled">
          <a class="page-link"></a>
        </li>
        <li class="page-item">
          <a class="page-link" (click)="nextPg()"> 1 </a>
        </li>
        <li class="page-item">
          <a class="page-link"></a>
        </li>
      </ul>
    `
})
export class QuestionComponent {
  public dataFromServer: string;

  constructor(
    private readonly httpClient : HttpClient
  ) { }

  public nextPg() {
    this.httpClient.get<string>('some_url')
      .subscribe(x => this.dataFromServer = x);
  }
}

And I have a test to verify the HTTP call:

describe('test', () => {
  let component: QuestionComponent;
  let fixture: ComponentFixture<QuestionComponent>;

  let httpMock: HttpTestingController;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [
        QuestionComponent
      ],
      imports: [
        NgbModule,
        HttpClientTestingModule
      ],
      providers: []
    });

    fixture = TestBed.createComponent(QuestionComponent);
    component = fixture.componentInstance;
    httpMock = TestBed.inject(HttpTestingController);
  });

  it('test', () => {
    fixture.detectChanges();

    const pageItem = fixture.nativeElement.querySelectorAll('.page-link')[1];
    pageItem.click();

    httpMock.expectOne('some_url');
  });

});

How do I login to salesforce by using cypress?

I'm currently using Cypress to do some testing. However, I have to run some tests against Salesforce, and I'm getting the following issue: 'Whoops, there is no test to run.'

context('Salesforce', () => {
    beforeEach(() => {
      cy.request("https://test.salesforce.com/?un=username%40domain_name&pw=user_password&startURL=%2F001")
    })
    it('.click() - click on a DOM element', () => {
      
      // load opportunity page
      cy.visit('https://test.salesfroce.com/lightning/page/home')
      // optional. let's the page load during the test runner
      cy.wait(7000)
      
      //cy.get('#username').type('username')
      //cy.get('#password').type('password')
      //cy.contains('Log In to Sandbox')


    })
})


Does anyone know how to bypass the login page with cypress?

How to select multiple checkboxes in Dusk test

I am using VueJS with bootstrap-vue, and using Laravel Dusk, I need to test a table inside a modal that uses a checkbox to select each row. In this particular case, there are multiple rows with checkboxes, and I need to select all of the checkboxes in the form and then submit the form. My test works fine with the modal in every way except checking the checkboxes; no matter what I try, I can't get the check() (or click()) method to check more than the first one. I've tried

->check("input[type=checkbox]")

and using the name

->check("input[name='my-custom-name']")

but I still only get the first item checked. Based on this, I tried something like

$checkboxes = $browser->driver->findElements(WebDriverBy::name('my-custom-name[]'));
for ($i = 0; $i <= count($checkboxes); $i++) {
    $checkboxes[0]->click();
}

but even that only checks the first checkbox. What do I need to do to select all of the checkboxes in the table?
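
One thing worth noting in the loop above: it clicks $checkboxes[0] on every pass and runs one step past the end of the array, so only the first box is ever toggled. A sketch with the index actually varying (this may still need adjustment if bootstrap-vue hides the native inputs behind styled labels):

$checkboxes = $browser->driver->findElements(WebDriverBy::name('my-custom-name[]'));
for ($i = 0; $i < count($checkboxes); $i++) {
    $checkboxes[$i]->click();   // click each located checkbox in turn
}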

Laravel Dusk Tinymce

Maybe someone knows how I can insert text into TinyMCE during Dusk testing?

<textarea class="form-control" name="text" cols="50" rows="10" id="text" style="display: none;" aria-hidden="true"></textarea>
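
TinyMCE hides the underlying textarea, so typing into it directly won't work. One option (a sketch, assuming the editor is initialised on the textarea with id "text") is to set the content through TinyMCE's own JavaScript API from the Dusk test:

// run TinyMCE's setContent() inside the browser from the Dusk test
$browser->script("tinymce.get('text').setContent('Hello from Dusk');");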

Mock a test with a external api rest in Django Rest Framework

Context

I am developing a REST API with Django Rest Framework, and some of my endpoints consume an external REST API to create a model instance. The code works, but now I am trying to test it and I have struggled a lot with that. I am a beginner at writing tests, and maybe the question is very obvious, but I tried many things with the links below and couldn't get it to work.

Mocking external API for testing with Python

How do I mock a third party library inside a Django Rest Framework endpoint while testing?

https://realpython.com/testing-third-party-apis-with-mocks/

Codes

This is my view

from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from app.lnd_client import new_address
from app.models import Wallet
from api.serializers import WalletSerializer



class WalletViewSet(viewsets.ModelViewSet):
    queryset = Wallet.objects.all()
    serializer_class = WalletSerializer
    lookup_field = "address"

    @action(methods=["post"], detail=False)
    def new_address(self, request):
        response = new_address()
        if "error" in response.keys():
            return Response(
                data={"error": response["error"]}, status=response["status_code"]
            )
        else:
            return Response(response)

    def create(self, request, *args, **kwargs):

        response = self.new_address(self)
        if response.status_code >= 400:
            return response

        serializer = self.get_serializer(data=response.data)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        headers = self.get_success_headers(serializer.data)
        return Response(
            serializer.data, status=status.HTTP_201_CREATED, headers=headers
        )

This is my client with the call to the external api

import requests
from django.conf import settings
from typing import Dict

endpoint = settings.LND_REST["ENDPOINT"]
endpoint = endpoint[:-1] if endpoint.endswith("/") else endpoint
endpoint = (
    "https://" + endpoint if not endpoint.startswith("http") else endpoint
)
endpoint = endpoint

auth = {"Grpc-Metadata-macaroon": settings.LND_REST["MACAROON"]}
cert = settings.LND_REST["CERT"]

def new_address() -> Dict:

    try:
        r = requests.get(
            f"{endpoint}/v1/newaddress", headers=auth, verify=cert
        )
        r.raise_for_status()
    except requests.exceptions.HTTPError as errh:
        return {
            "error": errh.response.text,
            "status_code": errh.response.status_code,
        }
    except requests.exceptions.RequestException:
        return {
            "error": f"Unable to connect to {endpoint}",
            "status_code": 500,
        }

    return r.json()

My test is this

from django.urls import reverse
from nose.tools import eq_
from rest_framework.test import APITestCase
from unittest.mock import patch
from rest_framework import status
from .factories import WalletFactory
from app.lnd_client import new_address

class TestWalletListTestCase(APITestCase):
    def setUp(self):
        WalletFactory.create_batch(5)
        self.url = reverse("wallet-list")
        self.wallet_data = WalletFactory.generate_address()

    def test_get_all_wallets(self):
        response = self.client.get(self.url)
        eq_(response.status_code, status.HTTP_200_OK)

    #This is the problem test
    @patch('app.lnd_client.requests.post')
    def test_create_wallet(self,mock_method):
        mock_method.return_value.ok = True
        response = self.client.post(self.url)
        eq_(response.status_code, status.HTTP_201_CREATED)
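
One possible reason the mock is never hit: the client calls requests.get, while the test patches requests.post. A sketch of the same test aimed at the call that is actually made, keeping the class and imports from the question (the response body is a hypothetical payload; adjust it to whatever the Wallet serializer expects):

    # the client calls requests.get, so that is what needs to be patched
    @patch('app.lnd_client.requests.get')
    def test_create_wallet(self, mock_get):
        # hypothetical successful body returned by the LND endpoint
        mock_get.return_value.json.return_value = {"address": "bc1qexampleaddress"}
        response = self.client.post(self.url)
        eq_(response.status_code, status.HTTP_201_CREATED)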

React Test for a Component Every Time its Rendered/Used

Does anyone know if there is a way to test a React component every time and everywhere it is used? That is without having to write a test for the specific page it is being used on.

For example if I have a Button component, can there be a test for that component that is run whenever/wherever it's used?

Pytest: How to mock a class method inside a for loop?

I have a class method, called inside a for loop, that I would like to mock using pytest. The method has some default named arguments.

Here is something I have tried:

import pytest
import my_module

def test_dummy_method(monkeypatch):
    
    def mockreturn(**kwargs):
        
        dummy_mocked_values = [1, 2, 3]
        
        for val in dummy_mocked_values:
            yield val

    dummy_mock = mockreturn()
    monkeypatch.setattr(my_module, 'dummy_method', lambda x: next(dummy_mock))

    list_return_values = []
    
    for i in range(3):
        result_ = my_module.dummy_method(dummy_arg_1="foo", dummy_arg_2=42) # here, I am expecting result_ to be 1, 2 and 3 successively...
        list_return_values.append(result_)
    
    assert list_return_values == [1, 2, 3]
    

But I am getting the following error:

FAILED dummy_repo/tests/test_dummy_method.py::test_dummy_method- TypeError: <lambda>() got an unexpected keyword argument 'dummy_arg_1'

because the signature of the lambda function does not match that of the mocked dummy_method being tested.

How can I use monkeypatch.setattr to yield different values from a mocked method on each call in a pythonic and clean way?

I know that one can do that with MagicMock, but if possible I would like to use only pytest.

Thanks!
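
The error message points at the fix: the replacement lambda only accepts one positional argument. A sketch where the stand-in simply absorbs whatever arguments the caller passes, keeping the imports from the question:

def test_dummy_method(monkeypatch):

    def mockreturn():
        yield from [1, 2, 3]

    dummy_mock = mockreturn()
    # *args/**kwargs make the stand-in accept the same call signature as the real method
    monkeypatch.setattr(my_module, 'dummy_method', lambda *args, **kwargs: next(dummy_mock))

    results = [my_module.dummy_method(dummy_arg_1="foo", dummy_arg_2=42) for _ in range(3)]
    assert results == [1, 2, 3]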

How can I change settings in a remote service during testing?

I have a number of integration tests that talk to a remote service. One of the things I need to test is several authentication methods but to do that I need to edit a configuration file as a superuser. What is the best way to automate that change during testing?

Does Taiga Agile provide features for testers? [closed]

I recently started a job and they asked me to start writing user stories and test cases in Taiga Agile, and I don't know where I should write the test cases. Please let me know if you know where I can write them.

What features does Taiga Agile provide for testers? Is there any feature for adding test cases to a project?

Thanks all.

Tuesday, December 29, 2020

Is using multiple Cucumber runners on the same project under different packages a good practice?

I want to use Cucumber for testing all the APIs as TestNG tests inside my microservice, which is a SpringBootApplication.

So I've defined for the beginning 2 feature files under: src/test/resources/features:

  • API1.feature
  • API2.feature

Then, for each feature file, I have implemented the Step Definitions classes and CucumberRunners under src/test/cucumber:

/API1:

  • StepDefAPI1.java annotated with @CucumberContextConfiguration
  • CucumberRunner.java - extends AbstractTestNGCucumberTests

/API2:

  • StepDefAPI2.java annotated with @CucumberContextConfiguration
  • CucumberRunner.java - extends AbstractTestNGCucumberTests

Is this a good practice to apply, having one CucumberRunner for each feature file?

Because otherwise, keeping just one CucumberRunner.java class with one configuration class (annotated with @CucumberContextConfiguration) leads to unexpected behavior when the tests are run on Jenkins (they fail due to some MongoDB exceptions [https://ift.tt/38KiWMB], while locally everything runs smoothly).

Issue with WebdriverIO v6 dragAndDrop method

I've tried drag and drop using dragAndDrop and performAction, but neither of them works for Vue.Draggable (https://sortablejs.github.io/Vue.Draggable/#/two-lists). Can anyone share a solution if possible?

"React.createContext is not a function" only when i using emotion/styled and @testing-library/react

I am using emotion/styled with Next.js and TypeScript. It works well on "yarn dev", and the CSS/HTML works very well too, but when I test with "@testing-library/react" it returns the error "React.createContext is not a function".

I searched Google for this error and found that it is a module version clash problem, so I deleted node_modules and reinstalled the modules, but it still returns the error.

I think this error is related to the emotion library, because it only happens when I use emotion.

Network Hardware Resource Management Tool for CI/CD Automation (preferably in Python)

I'm looking for some kind of hardware "resource" management tool (preferably in Python, since that's what I'm most comfortable with, but other languages are welcome too!) that I can use to automate the process of "checking out" or allocating resources for automated testing as part of CI/CD workflows. To be clear, by "hardware resource" I mean something similar to a Raspberry Pi. It's not a Raspberry Pi that I'm testing, but that's probably a good place to start the example. And if I'm being honest, it's more like every model of the Raspberry Pi (all the 2/3/4 derivatives, etc.).

Imagine a test "network" with a central management node (system) that "hands out" access to various models of "Raspberry Pi" to various CI/CD workflows that are asking for them. Say my CI/CD system sees a new commit on some project branch, kicks off a new test build, and thus needs to test the build on a Pi 3B+ and a Pi 4B. So, it needs to ask for those devices to be "checked out" as needed. When those workflows are done, those two devices can be "checked back in" and go back to a dormant state, waiting for the next build to come along.

Again, to reiterate, I'm really looking for something that just hands out "access" to these devices. It doesn't need to build anything or perform testing itself.

Some key things that I'm hoping to achieve:

  • Notifications: I'd love to hear when a device seems to be acting weird (as they inevitably will) so that I can fix it ASAP
  • RESTful API access to request a "resource" for checkout, and conversely to return it upon completion of any/all tests
  • Timeouts: If a CI/CD workflow doesn't return a resource after a set period of time, that device is "recalled" (as best it can), and if the device seems "unresponsive" then a notification can be sent to an administrator
  • Ability to extend a "framework" would be nice. There's a fair bit of firmware involved, and the ability to upgrade/downgrade/etc. directly with this system would be phenomenal!

Things I'm NOT Looking for:

  • Full CI/CD build server: I'm already planning to use Jenkins to build things and run the tests, I just need something to hand Jenkins and my workflows the appropriate devices when needed.

I've done some research, and keep ending up empty handed in this regard, so I'd like to know if anyone's aware of such a framework that I could use. I've considered writing my own system with FastAPI (still might go that way if I can't find a better alternative) but I'd really like to get something else that already exists to help me get this out the door (so to speak).

How to get Testcafe to fail on the not allowed CORS headers

I'm testing that the production web app only permits allowed CORS HTTP headers. In my tests, the illegal CORS headers I send bypass the browser validations, since the request goes to and comes from a proxy URL located on the same origin. For example:

Request URL: http://192.168.0.139:43561/T87i44N5N*DrJe7nq7U/https://example.com/cart

My HTTP request contains the following custom HTTP header:

evil-header: break-it

In the response, this header is not included in the access-control-allow-headers value and is supposed to cause a CORS error.

access-control-allow-headers: authorization, accept-currency, accept-language, cookie, content-type, context-campaign, context-container-name, csrf-token, customer-id, experiments, geo-ip-country, origin, referer, session-id, visitor-id

Despite this, the request returns with status code 200 and the test does not fail.

How to match default properties with Flutter widget testing

I'm testing a custom progress bar widget.

I initialize the buffered parameter with a default value of Duration.zero:

class ProgressBar extends LeafRenderObjectWidget {
  const ProgressBar({
    Key key,
    @required this.progress,
    @required this.total,
    this.buffered,
    this.onSeek,
    this.barHeight = 5.0,
    this.baseBarColor,
    this.progressBarColor,
    this.bufferedBarColor,
    this.thumbRadius = 10.0,
    this.thumbColor,
    this.timeLabelLocation,
  }) : super(key: key);

  final Duration progress;
  final Duration total;
  final Duration buffered;
  final ValueChanged<Duration> onSeek;
  final Color baseBarColor;
  final Color progressBarColor;
  final Color bufferedBarColor;
  final double barHeight;
  final double thumbRadius;
  final Color thumbColor;
  final TimeLabelLocation timeLabelLocation;

  @override
  _RenderProgressBar createRenderObject(BuildContext context) {
    final theme = Theme.of(context);
    final primaryColor = theme.colorScheme.primary;
    final textStyle = theme.textTheme.bodyText1;
    return _RenderProgressBar(
      progress: progress,
      total: total,
      buffered: buffered ?? Duration.zero, //        <-- here
      onSeek: onSeek,
      barHeight: barHeight,
      baseBarColor: baseBarColor ?? primaryColor.withOpacity(0.24),
      progressBarColor: progressBarColor ?? primaryColor,
      bufferedBarColor: bufferedBarColor ?? primaryColor.withOpacity(0.24),
      thumbRadius: thumbRadius,
      thumbColor: thumbColor ?? primaryColor,
      timeLabelLocation: timeLabelLocation ?? TimeLabelLocation.below,
      timeLabelTextStyle: textStyle,
    );
  }

  @override
  void updateRenderObject(
      BuildContext context, _RenderProgressBar renderObject) {
    final theme = Theme.of(context);
    final primaryColor = theme.colorScheme.primary;
    final textStyle = theme.textTheme.bodyText1;
    renderObject
      ..progress = progress
      ..total = total
      ..buffered = buffered ?? Duration.zero //        <-- and here
      ..onSeek = onSeek
      ..barHeight = barHeight
      ..baseBarColor = baseBarColor ?? primaryColor.withOpacity(0.24)
      ..progressBarColor = progressBarColor ?? primaryColor
      ..bufferedBarColor = bufferedBarColor ?? primaryColor.withOpacity(0.24)
      ..thumbRadius = thumbRadius
      ..thumbColor = thumbColor ?? primaryColor
      ..timeLabelLocation = timeLabelLocation ?? TimeLabelLocation.below
      ..timeLabelTextStyle = textStyle;
  }
}

However, when I test the buffered property the test fails.

Here is my test:

import 'dart:ui';

import 'package:audio_video_progress_bar/audio_video_progress_bar.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {

  testWidgets('Default properties are OK', (WidgetTester tester) async {
    await tester.pumpWidget(
      ProgressBar(
        progress: Duration.zero,
        total: Duration(minutes: 5),
      ),
    );

    ProgressBar progressBar = tester.firstWidget(find.byType(ProgressBar));
    expect(progressBar, isNotNull);

    expect(progressBar.progress, Duration.zero);      // OK
    expect(progressBar.total, Duration(minutes: 5));  // OK
    expect(progressBar.barHeight, 5.0);               // OK
    expect(progressBar.onSeek, isNull);               // OK
    expect(progressBar.thumbRadius, 10.0);            // OK

    expect(progressBar.buffered, Duration.zero);      // Fails
  });
}

The double defaults were OK but the buffered duration default failed:

══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
The following TestFailure object was thrown running a test:
  Expected: Duration:<0:00:00.000000>
  Actual: <null>

Do I have to do more than just pump the widget to allow the defaults to be applied?

Test latlng in Firestore

I've been trying to test the GeoPoint object from Firestore. Here's the rules code that I have:

request.resource.data.latLng is latlng;

and here's the portion of my test:

await firebase.assertSucceeds(userLocationRef.set({
            "name": "Some name",
            "locationString": "Some location",
            "objectID": "objectID",
            "phoneNumber": "Some phone number",
            "latLng": {
                "latitude": 9.82,
                "longitude": -82.877
            }
        }))

With this code my test doesn't pass. Is there a way to test a GeoPoint from Firestore?
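
A plain { latitude, longitude } map is not a GeoPoint, so the is latlng check fails. One sketch (assuming the test's firebase namespace comes from @firebase/testing / @firebase/rules-unit-testing, which re-exports the Firestore GeoPoint class) is to pass a real GeoPoint instance:

await firebase.assertSucceeds(userLocationRef.set({
    "name": "Some name",
    "locationString": "Some location",
    "objectID": "objectID",
    "phoneNumber": "Some phone number",
    // construct an actual GeoPoint instead of a plain map
    "latLng": new firebase.firestore.GeoPoint(9.82, -82.877)
}))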

Writing tests with Jasmine for vanilla JS

I followed Dev Ed's course on creating a todo list, but now I want to write some tests for the code. I want to use Jest, but I'm not sure how to write a test to make sure that a todo is created and deleted.

I've added the app.js file below (there are HTML/CSS files as well). My attempt at writing a test is under the app.js file.

//Selectors
const todoInput = document.querySelector('.todo-input');
const todoButton = document.querySelector('.todo-button');
const todoList = document.querySelector('.todo-list');
const filterOption = document.querySelector('.filter-todo');

//Event Listeners
todoButton.addEventListener('click', addTodo);
todoList.addEventListener('click', deleteCheck);
filterOption.addEventListener('change', filterTodo);

//Functions
function addTodo(event){
    //console.log(event.target);
    // prevent form from submitting
    event.preventDefault();
    const todoDiv = document.createElement("div");
    todoDiv.classList.add("todo");
    //create LI
    const newTodo = document.createElement('li');
    newTodo.innerText = todoInput.value;
    newTodo.classList.add('todo-item');
    todoDiv.appendChild(newTodo);
    // Checkmark button
    const completedButton = document.createElement('button');
    completedButton.innerHTML = '<i class="fas fa-check"></i>'
    completedButton.classList.add("complete-btn");
    todoDiv.appendChild(completedButton);
    // Delete button
    const deleteButton = document.createElement('button');
    deleteButton.innerHTML = '<i class="fas fa-trash"></i>'
    deleteButton.classList.add("delete-btn");
    todoDiv.appendChild(deleteButton);
    // Append to list
    todoList.appendChild(todoDiv);
    // clear todo input value
    todoInput.value = "";
}

function deleteCheck(e){
    //console.log(e.target);
    const item = e.target;
    if(item.classList[0] === 'delete-btn'){
        const todo = item.parentElement;
        // animation
        todo.classList.add("fall");
        todo.addEventListener("transitionend", function() {
            todo.remove();
        });
    }
    // check mark
    if(item.classList[0] === "complete-btn"){
        const todo = item.parentElement;
        todo.classList.toggle('completed');
    }
}

function filterTodo(e) {
    const todos = todoList.childNodes;
    todos.forEach(function(todo) {
      switch (e.target.value) {
        case "all":
          todo.style.display = "flex";
          break;
        case "completed":
          if (todo.classList.contains("completed")) {
            todo.style.display = "flex";
          } else {
            todo.style.display = "none";
          }
          break;
        case "uncompleted":
          if (!todo.classList.contains("completed")) {
            todo.style.display = "flex";
          } else {
            todo.style.display = "none";
          }
      }
    });
  }

The first test I created was to check if a todo is created

const todo = require('./app')

test('add a todo', () => {
    expect(todo("buy milk".toBe("buy milk")))
})

But the test failed.
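
One way to get a first passing test (a sketch with two assumptions: Jest runs with the jsdom test environment, and app.js is changed to export its functions with module.exports = { addTodo, deleteCheck, filterTodo };) is to build the markup the script expects before requiring it, then call addTodo with a fake event:

test('add a todo', () => {
    // the selectors in app.js must find these elements when the module loads
    document.body.innerHTML = `
        <input class="todo-input" value="buy milk" />
        <button class="todo-button"></button>
        <ul class="todo-list"></ul>
        <select class="filter-todo"></select>
    `;

    const { addTodo } = require('./app');

    addTodo({ preventDefault: () => {} });   // minimal fake form-submit event

    expect(document.querySelectorAll('.todo-item').length).toBe(1);     // a todo item was added
    expect(document.querySelector('.todo-input').value).toBe('');       // and the input was cleared
});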

Karate: how does match work with the expected value?

I would like to know how a Karate test performs the expected-value check with match. Let's say I have the expected value def expectedResponse = { Success: true, Name: '#regex ' } and then And match response == expectedResponse.

Now suppose Success is false in the response: will the regex check still happen or not? I mean, will the check terminate early since Success is false? Does match use an "AND" operator internally?

Forcing instantiation of all of a template classes member functions

During initial development of a template class, before I've written full test cases, I am finding that I would like the ability to force the compiler to generate the code for every member (including non-static ones) of a template class for a specific set of template parameters, just to make sure all the code at least compiles.

Specifically, I'm using GCC 9 (I don't really need this ability for other compilers, since it's just something I want to temporarily make happen during development); and its c++14, c++17, c++2a and c++20 standards.

For example, I might write the following code:

template <typename D> struct test_me {
  D value;
  void mistake1 () { value[0] = 0; }
  void mistake2 () { mistake1(value); }
  // and a bajillion other member functions...
};

And, given that I know in advance the finite set of possible template parameters (let's say int and float here), I just want to make sure they at least compile while I'm working.

Now I can do this, which obviously falls short:

int main () {
  test_me<int> i;
  test_me<float> f;
}

Since mistake1 and mistake2 aren't generated, the code compiles fine despite the indexed access attempt to a non-array type, and the incorrect function call.

So what I've been doing during development is just writing code that calls all the member functions:

template <typename D> static void testCalls () {
    test_me<D> t;
    t.mistake1();
    t.mistake2();
    // and so on... so many mistakes...
}

int main () {
   testCalls<int>();
   testCalls<float>();
}

But this gets to be a pain, especially when the member functions start to have complex side effects or preconditions, or require nontrivial parameters, or have non-public members and not-yet-developed friends. So, I'd like a way to test compilation without having to explicitly call everything (and, ideally, I'd like to be able to test compilation without modifying any "test" code at all as I add new members).

So my question is: With at least GCC 9, is there a way to force (potentially temporarily) the compiler to generate code for a template class's entire set of members, given template parameters?
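
One standard tool for this is an explicit instantiation definition: it instantiates every member of the class template for the given arguments without any calls, so the bodies must compile. A sketch (with the buggy members above, these two lines will surface the compile errors, which is exactly the point):

// explicit instantiation definitions: all member functions are generated for these arguments
template struct test_me<int>;
template struct test_me<float>;

int main () {}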

unittest for a method that reads lines

I am learning unit testing and I want to write a test for proposals_extract(). How can I use unittest for a function that reads a huge file line by line?

class Conference:

    def __init__(self):
        self.talks = self.proposals_extract()
        self.Track = 1
        
    def proposals_extract(self):
        talks = {}
        f = open('input.txt')
        try:
            while True:
                line = f.readline()
                duration = ''.join(filter(lambda i: i.isdigit(), line))
                if duration:
                    talks[line] = duration
                elif duration == []:
                    print('for proposal {} no duration has been detected'.format(line))
                    break
                if not line:
                    break         
        except FileNotFoundError as e:
                print(e)
        f.close()
        return talks
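
One common way to test a method like this without a huge real file is to replace the built-in open() so it serves a small in-memory string. A sketch (the module name conference is an assumption):

import unittest
from unittest.mock import mock_open, patch

from conference import Conference   # assumption: the class lives in conference.py

class TestProposalsExtract(unittest.TestCase):
    def test_extracts_durations(self):
        fake_file = "Writing Fast Tests 60min\n"
        # readline() serves the fake content line by line, then '' at end of file
        with patch("builtins.open", mock_open(read_data=fake_file)):
            talks = Conference().talks
        self.assertEqual(list(talks.values()), ["60"])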

Selenium automation programming questions that Credit Suisse asks in its programming test (HackerEarth test)

What programming questions does Credit Suisse ask in its programming test (HackerEarth test) for QA Selenium automation testers with Java experience?

Cypress - Access Dropdown Options - Detached from DOM

I have already reviewed this post (and a few others, but this is the most similar) and read through the Cypress docs about this error message here. I know it's some kind of race error, but I can't for the life of me get it right!

I have a drawer containing a dropdown menu. I want to click the first dropdown menu, and select the first option.

Here's a simplified version of my test:

it('Adds new involved party to a case', () => {
  // click button to open drawer that contains dropdown
  cy.contains(' Beteiligter').should('exist').click();

  // check that title inside drawer exists to be sure it's open
  // then get the first form item (dropdown) and click it
  cy.contains('Beteiligter erstellen')
    .should('exist')
    .get('.ant-form-item')
    .eq(0)
    .click();

  // click submit button to create involvedParty
  cy.contains('Erstellen').click();

  // check that new party has been added to UI
  cy.get('.ant-collapse-header')
    .contains('Cypress Test Company')
    .should('have.length', 1);
});

If I add a check in Cypress it shows my '.ant-form-item' at index 0 IS visible and DOES exist on the DOM, but I get this error in Cypress:

Timed out retrying: cy.click() failed because this element is detached from the DOM.

<div class="ant-row ant-form-item">...</div>

Cypress requires elements be attached in the DOM to interact with them.

The previous command that ran was:

  > cy.eq()

This DOM element likely became detached somewhere between the previous and current command.

Common situations why this happens:
  - Your JS framework re-rendered asynchronously
  - Your app code reacted to an event firing and removed the element

You typically need to re-query for the element or add 'guards' which delay Cypress from running new commands.

This is the HTML for my dropdown:

<div class="ant-row ant-form-item">
  <div class="ant-col ant-form-item-control">
    <div class="ant-form-item-control-input">
      <div class="ant-form-item-control-input-content">
        <div class="ant-select ant-select-single ant-select-show-arrow">
          <div class="ant-select-selector"><span class="ant-select-selection-search"><input id="control-ref_role" autocomplete="off" type="search" class="ant-select-selection-search-input" role="combobox" aria-haspopup="listbox" aria-owns="control-ref_role_list" aria-autocomplete="list" aria-controls="control-ref_role_list" aria-activedescendant="control-ref_role_list_7" readonly="" unselectable="on" value="" style="opacity: 0;" aria-expanded="false"></span>
            <span
              class="ant-select-selection-placeholder"></span>
          </div><span class="ant-select-arrow" unselectable="on" aria-hidden="true" style="user-select: none;"><span role="img" aria-label="down" class="anticon anticon-down ant-select-suffix"><svg viewBox="64 64 896 896" focusable="false" data-icon="down" width="1em" height="1em" fill="currentColor" aria-hidden="true"><path d="M884 256h-75c-5.1 0-9.9 2.5-12.9 6.6L512 654.2 227.9 262.6c-3-4.1-7.8-6.6-12.9-6.6h-75c-6.5 0-10.3 7.4-6.5 12.7l352.6 486.1c12.8 17.6 39 17.6 51.7 0l352.6-486.1c3.9-5.3.1-12.7-6.4-12.7z"></path></svg></span></span>
        </div>
      </div>
    </div>
  </div>
</div>

How to test router post in jest?

I would like to test this

router.route('/').post(isValid, registerUser).get(protect);

So I create user.test.js with

const userData = {
  email: faker.internet.email(),
  firstname: 'Kiki',
  lastname: 'bibi',
  age: faker.random.number(),
  password:faker.internet.password(),
};

describe('insert', () => {
  beforeAll(async () => {
    dotenv.config();

    connectDB();
  });
});
describe("POST / register", () => {
it("Should create new user", async () => {
    const validUser = new User(userData);
    const savedUser = await validUser.save();
    const res = await router.route('/').post(isValid, registerUser).get(protect);
    expect(savedUser._id).toBeDefined();
    expect(savedUser.email).toBe(userData.email);
    expect(savedUser.firstname).toBe(userData.firstname);
    expect(savedUser.lastname).toBe(userData.lastname);
    expect(savedUser.age).toBe(userData.age);
});
});
afterAll(async (done) => {
  // Closing the DB connection allows Jest to exit successfully.
  mongoose.connection.close();
  done();
});

But it's not working; it only connects to the database. I would like to test with isValid because it validates the user input:

  check('email', 'Please include a valid email').isEmail(),
  check('password', 'Password is required')
    .exists()
    .isLength({ min: 8, max: 40 }),
  check('firstname', 'Fisrtname is required').exists().notEmpty(),
  check('lastname', 'Lastname is required').exists().notEmpty(),
  check('age', 'Age must be 13 or more').exists().isInt({ min: 13 }),
  (req, res, next) => {
    const errors = validationResult(req);
    if (!errors.isEmpty())
      return res.status(422).json({ errors: errors.array() });
    next();
  },
];

Can anyone explain to me how to test this with Jest? Many thanks.
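
The usual pattern for exercising an Express route (including its validation middleware) from Jest is supertest, which drives the real app over a temporary HTTP layer. A sketch (assumptions: supertest is installed, ../app exports the Express application that mounts the router, and registerUser answers 201 on success):

const request = require('supertest');
const app = require('../app');    // assumption: the file that mounts the router

describe('POST /register', () => {
  it('creates a new user', async () => {
    const res = await request(app)
      .post('/')                   // the path used in router.route('/')
      .send({
        email: 'kiki@example.com',
        firstname: 'Kiki',
        lastname: 'bibi',
        age: 20,
        password: 'longenoughpassword',
      });
    expect(res.status).toBe(201);
  });
});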

Mocking function inside another function using jest

I'd like to mock a function that is called by another function using Jest, but it doesn't work: the mock function is never called by the parent function. Could you help me, please? Thanks in advance.

app.js

const evaluate = (a, b) => {
  return 'Sum is ' + add(a, b);
}

const add = (a, b) => {
  return a + b;
}

exports.evaluate = evaluate;
exports.add = add;

test.js

const evaluateService = require("./app");

describe('Call jumpRequest', () => {

  it('should return Evaluate', async () => {

    evaluateService.add = jest.fn().mockReturnValue(12);
    const res = await evaluateService.evaluate(5, 3);
    expect(res).toEqual("Sum is 12");
 })
})
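
A likely cause: evaluate calls the local add binding directly, so reassigning evaluateService.add in the test never reaches it. A sketch of app.js where the call goes through the exports object, which is exactly what the test replaces:

// app.js
const evaluate = (a, b) => {
  return 'Sum is ' + exports.add(a, b);   // call via exports so a test can swap it out
};

const add = (a, b) => {
  return a + b;
};

exports.evaluate = evaluate;
exports.add = add;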

Cannot run testng.xml after upgrading IntelliJ

Today I updated IntelliJ to the latest version, from 2020.2 to 2020.3.

In the previous version I could run the TestNG XML by right-clicking it and choosing Run. After the update I cannot find that option anymore. How can I run testng.xml in the new IntelliJ version? Regards.

(Screenshots of the run context menu before and after the update were attached.)

TestCafe: The TypeText function seems to use CAPS LOCK for UpperCase letters. Is there a way to tell it to use SHIFT instead?

We have a test that types text into a password field and on our website we indicate a warning to the end user when their CAPS LOCK key is on. We've noticed that passwords which end in an uppercase letter are causing our tests to fail because we don't expect the CAPS LOCK error to be displayed at this time.

Does anyone know if there is a way to tell ".typeText" to use the SHIFT key instead of CAPS LOCK? Or are there alternative ways to enter this text such that the caps lock key won't be used? (I suppose we could send the 'CTRL+V' key combo, but we are hoping not to use that)

Angular Unit testing: Testing http with real APIs

I am writing unit tests in Angular CLI using Karma. I want to test HTTP calls against the real API rather than mocks, because the API models change frequently and I'm looking for a way to be sure about my endpoint responses. Is there any solution or sample code?

Here is one of my services that I want to test with the real API:

  getCodeRequest(phone: string) {

    let params = new HttpParams();
    params = params.append('phone', String(phone));

    return this.http.get('/auth/request', { params });
  }

I will appreciate any tip or pointer.

How to track the flow of a C program?

It's more of a general question: how do you trace the execution path of a C program consisting of multiple functions? For example, I have the following C program:

 void test_if_statement(int num)
{
    if (num > 0)
        //do smth
        ;
    else
        //do smth else
        ;
}

int main(void)
{
    int num;
    printf("enter an int value:");
    scanf("%d", &num);
    test_if_statement(num);
    return 0;
}

Currently I'm using something like this to see which branch my function took in the if statements:

void test_if_statement(int num)
{
        if (num > 0)
            printf("i'm here\n");
            //do smth
        
        else
            printf("now i'm there\n");
            //do smth else
}

How can I keep it simple and more universal? ;) Putting a printf in every if/else pair seems unreasonably bulky...
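
One lightweight option is a small trace macro: it prints the file, line, and function wherever it is expanded, so every branch needs only one identical line (a sketch):

#include <stdio.h>

/* Print the location that was reached; __func__ requires C99 or later. */
#define TRACE() printf("reached %s:%d (%s)\n", __FILE__, __LINE__, __func__)

void test_if_statement(int num)
{
    if (num > 0) {
        TRACE();
        /* do smth */
    } else {
        TRACE();
        /* do smth else */
    }
}

int main(void)
{
    int num;
    printf("enter an int value:");
    if (scanf("%d", &num) == 1)
        test_if_statement(num);
    return 0;
}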

Rails /lib use in fixtures

I have some common code that I have abstracted to a module in my /lib directory:

module ExampleModule
  
  def self.do_something
    # return a value
  end
end

I need to use this method in my test fixtures, however I am getting the following error when I run tests:

Error:

SomeTest#test_that_something:

NoMethodError: undefined method `do_something' for ExampleModule:Module test/fixtures/model_name.yml:4:in `get_binding'

How can I access this method in the test fixtures? I can fix the error by requiring the module in other parts of the app that are loaded during test, but how would I require this module if it was needed specifically in the fixture file here? Would I need to require it in any test using the fixture?

TI TMS570 Series Safety Test Suite

I need a safety test suite for TMS570 series MCU. Do you have any suggestions? I think we can use https://www.qa-systems.com/tools/cantata/ but I am not sure.

We need a complete test suite.

Thanks.

Is it possible to initially seed several tables once before functional testing?

I have just started using the Laravel Permission plugin with Laravel 8, and now I need to seed 3 tables (Roles, Permissions, ...) before each test:

Without this, my functional tests run in 57 secs.

After including the seeding of those 2 tables in the setUp() method, the tests run in 3 minutes and 8 seconds :(

public function setUp(): void
    {
        // first include all the normal setUp operations
        parent::setUp();
//        $this->withoutExceptionHandling();
        // create roles and assign existing permissions
        $this->seed('RoleTableSeeder');
        $this->seed('PermissionTableSeeder');

        $this->company = Company::factory()->create();
        $this->operation = Operation::factory()->create(['company_id' => 1]);
        $this->user = User::factory()->create(
            [
                'email' => 'ju@su.com',
                'company_id' => 1, 
                'operation_id' => 1,
            ]);
        $this->user->assignRole(User::SUPER_ADMIN);
        $this->actingAs($this->user);

    }

Is there a way to seed my 3 tables once initially, so that they aren't re-seeded for every single test and the suite doesn't take so long?

wait function can't take file and line parameters; not the same as apple/swift-corelibs-xctest

I was trying to refactor my unit test file to have an asynchronous API, and I wanted a custom expectation function, as follows.

func expect(_ sut: SUT, toCompleteWith: EQUATABLE, when action: () -> Void, file: StaticString = #file, line: UInt = #line) {
    ...
      wait(for: [exp], timeout: 1.0, enforceOrder: false, file: file, line: line)
}

But it seems the wait method on XCTestCase does not take file and line as input, and the Apple documentation about wait does not mention the file parameter either.

Apple Document about wait: https://developer.apple.com/documentation/xctest/xctestcase/2806856-wait

However, I found that apple/swift-corelibs-xctest does have the file, line API.

apple/swift-corelibs-xctest about wait: https://github.com/apple/swift-corelibs-xctest/blob/1764c5432ac5a2d8c4de5373fa19ad43c870cf1e/Sources/XCTest/Public/Asynchronous/XCTestCase%2BAsynchronous.swift#L87

To be honest, I don't know what to do right now, but here are my thoughts on this:

  1. Just give up on this and try to import XCTest from GitHub, but it may fail due to the same framework name (just a thought, I did not give it a try).
  2. File a bug report, but I'm really not comfortable with this.

Monday, December 28, 2020

Multi-key press not working in Python Selenium

I am trying to press Ctrl+S and then press Enter on a webpage to save its HTML, but the multi-key press functionality is not working. I have tried way 1 and way 2, but neither worked. However, if I run action.send_keys('s') without executing action.key_down(Keys.CONTROL) before it, it works fine. This is my full code:

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome(executable_path="chromedriver.exe")

# get method to launch the URL
driver.get("https://www.google.ca/")
time.sleep(3)

action = ActionChains(driver)

# click google search bar (to make sure driver is working)
driver.find_element_by_name("q").click()
print("passed1")
time.sleep(3)

# way 1
action.key_down(Keys.CONTROL).send_keys('s').key_up(Keys.CONTROL).perform()
print("passed2")

# way 2
action.key_down(Keys.CONTROL)
action.send_keys('s')
action.key_up(Keys.CONTROL)
action.perform()
print("passed2")

time.sleep(100)

driver.close()

Can someone please explain to me what the issue is? I've been trying to figure it out for an hour now.

How to use a value from JSON written in the same fixture in TestCafe

I have been trying to use a value from a JSON file that I successfully wrote using the fs.write() function.

There are two test cases in the same fixture: one creates an ID, and the second uses that ID. I can write the ID to the JSON file successfully using fs.write(), and I try to use it by importing the JSON file like var myid = require('../../resources/id.json').

The JSON file stores the correct ID of the current execution, but I get the ID of the first test execution during the second execution.

For example, id 1234 is stored during the first test execution and id 4567 is stored during the second. During the second execution I need 4567 but I get 1234; this is weird, isn't it?

I use it like t.typeText(ele, myid.orid)

My JSON file contains only the id, like {"orid":"4567"}.

I am new to JavaScript and TestCafe; any help would really be appreciated.
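
A likely explanation: require() reads and caches id.json once, when the test file is loaded (before any test runs), so both tests see the value from the previous run. A sketch that re-reads the file inside the test instead (the fixture name and selector are placeholders):

import fs from 'fs';
import path from 'path';
import { Selector } from 'testcafe';

fixture('Use the freshly written id');

const idFile = path.join(__dirname, '../../resources/id.json');

test('type the id created by the previous test', async t => {
    const myid = JSON.parse(fs.readFileSync(idFile, 'utf8'));   // read the current contents, not a cached copy
    await t.typeText(Selector('#some-input'), myid.orid);       // '#some-input' is a placeholder selector
});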

single test or a bundle for software

A company asked me to implement a solution to a problem. I finished the coding and it works, but they asked me for tests or a bundle, and I have no idea what these are exactly or what they mean by tests. Here are their emails:

< I cannot find a single test or a bundle that a customer could have used, is that intended? >

< the point is that a customer would not be satisfied with a single .py file to solve a problem. To ensure that a algorithm works you need to write tests. If you would have spent money for a developer would you be satisfied with your result? It solves the problem - yes, but… >

It would be great if you could share some examples.

Install jpeg 2000 on Windows 10

I want to investigate a new application for JPEG 2000 encoding and decoding. I downloaded openjpeg-master and managed to cobble together the ability to cmake the files. After a bunch of grinding, this resulted in the following output: "Build files have been written to: C: openjpeg-master/build \build> "

Any "normal" Unix installations have a multi-step installation like this:

"UNIX/LINUX - MacOS (terminal) - WINDOWS (cygwin, MinGW) To build the library, type from source tree directory:

mkdir build

cd build

cmake .. -DCMAKE_BUILD_TYPE=Release

make

Binaries are then located in the 'bin' directory.

To install the library, type with root privileges:

make install

make clean To build the html documentation, you need doxygen to be installed on your system. It will create an "html" directory in TOP_LEVEL/build/doc)

make doc"

But the Windows 10 equivalent is unclear, to put the most charitable spin on it. You can find it here: "https://ift.tt/3rz9iFf" Some questions arise:

  1. is there a better starting place for installing JPEG 2000 that actually shows me how to install it and run the tests?
  2. if not, how do I get from the build files to installing the libraries and making the test programs?
  3. Is there more information I can dig out that would help to answer these questions?
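
For what it's worth, once CMake has generated the build files on Windows, the generator-agnostic commands for question 2 are roughly the following (a sketch; cmake --install needs CMake 3.15+, and the ctest step assumes the project was configured with testing enabled):

cd build
cmake --build . --config Release
ctest -C Release
cmake --install . --config Release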

Mutex Functionality for Networked Hardware

I have a network-accessible hardware resource in my lab. Various pieces of software want to use this single resource for testing, experiments, etc. The hardware does not support "concurrent" users. Because of this, I need to make sure only one application is accessing it at a time and is not interrupted by another application (so state is maintained on the hardware).

If this were purely a software problem, I might look to implementing a Mutex to get this mutually-exclusive access control implemented. However, I cannot seem to find a multi-application, networkable solution.

For clarity, the general setup is this:

  • Hardware Asset, Application 1, and Application 2 exist at three different locations on the same network.
  • Application 1 and 2 are implemented using different (arbitrary) programming languages (consider C++ and Python) but both wish to interact with Hardware.
  • Application 1 and 2 are running on different computers under different operating systems.
  • "Hardware" is provided by a third party. It has an API but is otherwise a black box that I have no ability to modify.

For clarity, the workflow would ideally be:

  • Application 1 acquires a mutex for the Hardware over the network.
  • Application 1 holds the mutex and begins doing work.
  • Application 2 queues up over the network for the hardware mutex, but it is not yet available.
  • Application 1 finishes its work (perhaps a few minutes to a few hours later) and releases the Mutex.
  • Application 2 acquires the Mutex and begins its work.

(I don't have any specific requirements for notification vs. polling when it comes to acquiring a Mutex.)

I do not want to reinvent the wheel, but I'm also afraid I don't know what this type of Mutex control is called. I could see a helper-application doing the work here and blocking access by other applications as well. Any help here would be appreciated.
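
This kind of coordination is usually discussed under the name "distributed lock" (or distributed mutex). One commonly suggested shape is a small shared lock service that both applications talk to over the network; the sketch below uses Redis purely as an example backend, and the host, key name, owner string, and timeout are illustrative rather than part of the original setup. Since Redis has clients for C++, Python, and most other languages, both applications could follow the same pattern.

import time
import redis

r = redis.Redis(host="lock-server.example.com", port=6379)

def acquire_hardware_lock(owner: str, ttl_seconds: int = 4 * 3600) -> bool:
    # SET key value NX EX ttl succeeds only if the key does not already exist.
    return bool(r.set("hardware:lab-asset:lock", owner, nx=True, ex=ttl_seconds))

def release_hardware_lock(owner: str) -> None:
    # Only the current holder should release the lock.
    if r.get("hardware:lab-asset:lock") == owner.encode():
        r.delete("hardware:lab-asset:lock")

# Application 2 simply polls until the lock becomes free:
while not acquire_hardware_lock("application-2"):
    time.sleep(30)

The TTL guards against an application crashing while it holds the lock; a dedicated helper process sitting in front of the hardware's API would be the other common variant.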

Angular 2 integration Observable that is consumed by html testing

Problem: the component shows an infinite loading screen in the test. How do I trigger the observable to complete?

In the service I have a simple method that returns an observable from an HTTP GET request.

The result of this method is assigned to rentalPoints$

and later consumed in the component's HTML using the async pipe; a loading indicator is displayed while loading.

Component.ts

ngOnInit(): void {
    // Storing observable with rental points
    this.rentalPoints$ = this.rentalPointService.getRentalPoints();
}

Component.html

 <ng-container *ngIf="rentalPoints$ | async as rentalPoints; else loading">
    <div class="points">
        <h4></h4>
    </div>
</ng-container>

<ng-template #loading>
    <div class="text-center h4">
        <i class="fas fa-spinner fa-spin"></i>
    </div>
</ng-template>

Test component

// Creating mocked cabinet service
const RentalPointServiceStub = {
    // Creating method for cabinet mocked service
    getRentalPoints() {
        // Returning observable with loyalty points information
        return scheduled([{
            pendingAmount: 1,
            availableAmount: 2
        }]
        , asyncScheduler);
    }
};

    describe('RentalPointsComponent', () => {
        // Declaring test variables
        let component: RentalPointsComponent;
        let fixture: ComponentFixture<RentalPointsComponent>;
        let debugElement: DebugElement;
        let nativeElement: any;

    // Setting up Test Bed (testing environment)
    beforeEach(async () => {
        TestBed.configureTestingModule({
            imports: [
            ],
            providers: [
                { provide: RentalPointsService, useValue: RentalPointServiceStub },
            ],
            declarations: [
                RentalPointsComponent,
            ]
        }).compileComponents();
    });

    // Resetting variables before each test
    beforeEach(() => {
        fixture = TestBed.createComponent(RentalPointsComponent);
        component = fixture.componentInstance;
        debugElement = fixture.debugElement;
        nativeElement = debugElement.nativeElement;
        fixture.detectChanges();
    });

       it('should show indicator while loading, and render view after', fakeAsync(() => {
        expect(debugElement.query(By.css('.fa-spinner')).nativeElement).toBeTruthy();
        tick(50000); // doesnt finish observable
        flush();  // doesnt finish observable
        fixture.detectChanges();
        // expect(nativeElement.query(By.css('.points')));
    }));
});
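
One explanation that fits this setup: the stub emits via asyncScheduler, but the first fixture.detectChanges() runs in beforeEach, outside the fakeAsync zone, so the scheduler's timer is queued on the real clock and tick()/flush() never see it. A hedged sketch of the spec with the initial detectChanges() moved inside the fakeAsync callback (this assumes it is removed from beforeEach):

it('should show indicator while loading, and render view after', fakeAsync(() => {
    fixture.detectChanges(); // async pipe subscribes here, inside the fakeAsync zone
    expect(debugElement.query(By.css('.fa-spinner'))).toBeTruthy();

    tick();                  // flush the asyncScheduler emission
    fixture.detectChanges(); // re-render with the emitted value

    expect(debugElement.query(By.css('.points'))).toBeTruthy();
}));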

Convert data class to map to test http GET response body

I'm trying to test a GET that returns all the StatusMapping objects created; however, I'm not sure of the best approach to test this.

The response is returning maps, whereas I was expecting a list of StatusMapping objects. Should I convert the requests to maps instead?


Here's the Service code:

fun getAll(): ResponseEntity<List<StatusMapping>> {
    return ResponseEntity<List<StatusMapping>>(statusMappingRepository.findAll(), HttpStatus.OK)
}

Here's the test

@Test
fun `Get all mappings created`() {
    val requests = listOf(
        StatusMapping("available", "available"),
        StatusMapping("unavailable", "unavailable")
    )
    requests.forEach { statusMappingService.createMapping(it.toStatusMappingRequest()) }

    val response = restTemplate.getForEntity(getRootUrl(), List::class.java)

    assertEquals(response.body, requests)
}

Here's the error that I'm getting:

Expected :[{source=available, target=available}, {source=unavailable, target=unavailable}]
Actual   :[StatusMapping(source=available, target=available), StatusMapping(source=unavailable, target=unavailable)]
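
Since the call asks for List::class.java, Jackson has no element type to bind to and falls back to LinkedHashMaps, which is what the error output shows. A hedged sketch of one alternative, assuming StatusMapping is a data class that Jackson can bind to:

val response = restTemplate.getForEntity(getRootUrl(), Array<StatusMapping>::class.java)

assertEquals(requests, response.body?.toList())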

Advanced Jmeter Query

I've tested a few functionalities of my application using JMeter and observed its performance. My question is: now that I have all the numerical data, can I dig further to find out exactly what code, library, or memory usage is slowing down the system? Or does JMeter only provide numerical data?

Jira creating Test repository for Product

Hi, I'm new to Jira and want to create a test repository for one product without using any add-on.

This test repo will include all generic test cases plus custom ones for each customer. I won't execute them in Jira, but will use Jira to keep the test details (test steps, pre/post conditions, results, etc.). My goal is to create a parent issue/folder for the main product on top, and under that there will be sub-folders/items like Generic-Tests, Customer1, Customer2, etc., each containing the related tests.

At first I was planning to create an Epic with some sub-items under it, but that doesn't look right, so now I'm stuck.

Do you have any solution in order to create that type of test repo in Jira?

how to create golang test code that needs to use the database?

I want to publish my first Go library on GitHub.

I created a scan_test.go which has many tests that connect to a PostgreSQL database. It doesn't need any data, only a valid connection, since it tests the results of static queries, for example select 1 union select 2.

So, in order to release the package and have the tests still work, how do I allow the database connection to be configured for the tests? One idea that comes up is to use environment variables, but what's the conventional way? How do I properly set up tests for my project?

example of my test file:

const (
    host     = "localhost"
    port     = 5432
    user     = "ufk"
    password = "your-password"
    dbname   = "mycw"
)
type StructInt struct {
    Moshe  int
    Moshe2 int
    Moshe3 []int
    Moshe4 []*int
    Moshe5 []string
}

func TestVarsInStructInJsonArrayWithOneColumn(t *testing.T) {
    if conn, err := GetDbConnection(); err != nil {
        t.Errorf("could not connect to database: %v", err)
    } else {
        sqlQuery := `select json_build_array(json_build_object('moshe',55,'moshe2',66,'moshe3','{10,11}'::int[],'moshe4','{50,51}'::int[],
    'moshe5','{kfir,moshe}'::text[]),
                        json_build_object('moshe',56,'moshe2',67,'moshe3','{41,42}'::int[],'moshe4','{21,22}'::int[],
                                          'moshe5','{kfirrrr,moshrre}'::text[])) as moshe;`
        var foo []StructInt
        if isEmpty, err := Query(context.Background(), conn, &foo, sqlQuery); err != nil {
            t.Errorf("failed test: %v", err)
        } else if isEmpty {
            log.Fatal("failed test with empty results")
        }
        if foo[0].Moshe != 55 {
            t.Errorf("int slice test failed 21 <> %v", foo[0].Moshe)
        }
        if foo[1].Moshe2 != 67 {
            t.Errorf("int slice failed with 82 <> %v", foo[1].Moshe2)
        }
        if len(foo[1].Moshe3) != 2 {
            t.Errorf("int silice failed, array size should be 2 <> %v", len(foo[1].Moshe3))
        }
        if foo[1].Moshe3[1] != 42 {
            t.Errorf("int slice failed, moshe3[0] not 2 <=> %v", foo[1].Moshe3[1])
        }

        if len(foo[1].Moshe4) != 2 {
            t.Errorf("int silice failed, array size should be 2 <> %v", len(foo[1].Moshe4))
        }
        if *foo[1].Moshe4[1] != 22 {
            t.Errorf("int slice failed, moshe4[1] not 4 <=> %v", foo[1].Moshe4[1])
        }

    }

}
func GetDbConnection() (*pgxpool.Pool, error) {
    psqlInfo := fmt.Sprintf("host=%s port=%d user=%s "+
        "password=%s dbname=%s sslmode=disable",
        host, port, user, password, dbname)
    return pgxpool.Connect(context.Background(), psqlInfo)
}

thanks
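
A common convention (a hedged sketch below, with an illustrative variable name) is to read the connection settings from an environment variable and skip the database-backed tests when it is not set, so go test still passes for users without a database:

// requires "os" in the test file's imports
func getTestDbConnection(t *testing.T) (*pgxpool.Pool, error) {
    // SCAN_TEST_DSN is an illustrative name, e.g.
    // "host=localhost port=5432 user=ufk password=... dbname=mycw sslmode=disable"
    dsn := os.Getenv("SCAN_TEST_DSN")
    if dsn == "" {
        t.Skip("SCAN_TEST_DSN not set; skipping database tests")
    }
    return pgxpool.Connect(context.Background(), dsn)
}

Contributors (and CI) then export the variable before running go test; this also keeps credentials out of the repository.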

How to find aliases in Global Jest setup?

I want to run some stuff only once before all test cases. Therefore, I have created a global function and specified the globalSetup field in the jest configuration:

globalSetup: path.resolve(srcPath, 'TestUtils', 'globalSetup.ts'),

However, within globalSetup I use some @ aliases, and Jest complains that it cannot resolve them.

How can I run globalSetup once the aliases have been sorted out?

My Jest configuration is as follows:

module.exports = {
  rootDir: rootPath,
  coveragePathIgnorePatterns: ['/node_modules/'],
  preset: 'ts-jest',
  setupFiles: [path.resolve(__dirname, 'env.testing.ts')],
  setupFilesAfterEnv: [path.resolve(srcPath, 'TestUtils', 'testSetup.ts')],
  globalSetup: path.resolve(srcPath, 'TestUtils', 'globalSetup.ts'),
  globals: {},
  testEnvironment: 'node',
  moduleFileExtensions: ['js', 'ts', 'json'],
  moduleNameMapper: pathsToModuleNameMapper(compilerOptions.paths, { prefix: '<rootDir>/' })
};

When testSetup runs before every test, the aliases resolve fine, but this does not happen with globalSetup.

Any clue what could I do?
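
One explanation commonly given is that globalSetup runs outside Jest's per-test module system, so moduleNameMapper is not applied to it. A hedged sketch of a workaround, assuming the tsconfig-paths package is available and the aliases live in the tsconfig's compilerOptions.paths (the relative path to tsconfig.json is illustrative):

// globalSetup.ts
import { register } from 'tsconfig-paths';
import { compilerOptions } from '../../tsconfig.json';

export default async (): Promise<void> => {
  // Register the path aliases manually; static '@/...' imports at the top of this
  // file would be resolved before register() runs, so alias-dependent modules
  // should be loaded afterwards (e.g. with await import(...)).
  register({
    baseUrl: compilerOptions.baseUrl,
    paths: compilerOptions.paths,
  });

  // ...the one-time setup that needs the aliases can follow here.
};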

dimanche 27 décembre 2020

Factory_bot, how to reuse parent factory in relationship where child belongs to both parent and parents parent?

Shift belongs_to both Employee and Site while Employee belongs_to Site

Given that, here is a factory:

  factory :site do
    ...
  end
  
  factory :employee do
    site
    ...
  end

  factory :shift do
    employee # creates a site
    site # creates a new site; i.e., this is NOT same site as employee site, each call creates a new site
    ...
  end

When I run create(:shift), I'd like the default created shift.site to be the same as the shift.employee.site, but currently the behavior is that 2 sites are created, when I only want one.

Thanks,
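
A hedged sketch of the pattern usually used for this in FactoryBot, where the dependent attribute references the already-built association instead of declaring its own:

  factory :shift do
    employee
    site { employee.site }   # reuse the employee's site rather than creating a second one
    ...
  end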

Rspec raise matcher not working, copied syntax from docs?

rspec-core (3.9.1)
rspec-expectations (3.9.0)
rspec-mocks (3.9.1)
rspec-rails (4.0.0.beta4, 3.9.0)
rspec-support (3.9.2)

According to docs: https://relishapp.com/rspec/rspec-expectations/v/3-9/docs/built-in-matchers/raise-error-matcher, this should work:

expect { raise StandardError }.to raise_error

Yet in my code, when I run JUST that spec by itself, I get:

Failures:

  1) time rules should work
     Failure/Error: expect(raise StandardError).to raise_error
     
     StandardError:
       StandardError
     # ./spec/models/time_rules_spec.rb:87:in `block (2 levels) in <top (required)>'
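
Worth noting: the failure output shows expect(raise StandardError) with parentheses, so the error is raised while building the arguments, before the matcher ever runs; the documented form wraps the code in a block. A minimal sketch of the block form:

expect { raise StandardError }.to raise_error(StandardError)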

How can I resolve this error (No provider for InjectionToken angularfire2.app.options!) in an Angular test that uses Firestore?

In my Angular tests I have this problem in some components and in the service where I use Firestore. For example, this is the error for my ItemService: NullInjectorError: R3InjectorError(DynamicTestModule)[ItemService -> AngularFirestore -> InjectionToken angularfire2.app.options -> InjectionToken angularfire2.app.options]: NullInjectorError: No provider for InjectionToken angularfire2.app.options!

import { Injectable } from '@angular/core';
import {
  AngularFirestore,
  AngularFirestoreCollection,
} from '@angular/fire/firestore';

import { Item } from '../../environments/item';
import { Observable } from 'rxjs';
@Injectable({
  providedIn: 'root',
})
export class ItemService {
  itemsCollection: AngularFirestoreCollection<Item> | undefined;
  items: Observable<Item[]>;
  idField: any;
  itemDoc: any;
  constructor(public afs: AngularFirestore) {
    this.items = this.afs
      .collection<Item>('items')
      .valueChanges({ idField: 'id' });
    this.itemsCollection = this.afs.collection<Item>('items');
  }
}

and here is my app-module.ts


import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { ReactiveFormsModule } from '@angular/forms';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { environment } from '../environments/environment';
import { AngularFireModule } from '@angular/fire';
import {
  AngularFirestore,
  AngularFirestoreModule,
} from '@angular/fire/firestore';
import { ItemsComponent } from './components/items/items.component';
import { ItemService } from './services/item.service';
import { ItemComponent } from './components/item/item.component';
import { RouterModule } from '@angular/router';
import { NavComponent } from './components/nav/nav.component';
import { CartComponent } from './components/cart/cart.component';
import { CartService } from './services/cart.service';

import { AddItemComponent } from './components/add-item/add-item.component';
import { AngularFireDatabaseModule } from '@angular/fire/database';
@NgModule({
  declarations: [
    AppComponent,
    ItemsComponent,
    ItemComponent,
    NavComponent,
    CartComponent,
    AddItemComponent,
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    AngularFirestoreModule,
    AngularFireDatabaseModule,
    AngularFireModule.initializeApp(environment.firebase, 'project2'),
    RouterModule.forRoot([
      { path: '', component: ItemsComponent },
      { path: 'details/:id', component: ItemComponent },
      { path: 'cart', component: CartComponent },
      { path: 'newitem', component: AddItemComponent },
    ]),
    FormsModule,
    ReactiveFormsModule,
  ],
  providers: [CartService, ItemService],
  bootstrap: [AppComponent],
})
export class AppModule {}
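
The token in the error is the one AngularFireModule.initializeApp(...) provides in app.module, and the TestBed builds its own module, so that provider has to be declared there as well (or AngularFirestore has to be stubbed). A hedged sketch of the initialization variant, reusing the same environment config:

TestBed.configureTestingModule({
  imports: [
    AngularFireModule.initializeApp(environment.firebase),
    AngularFirestoreModule,
  ],
});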

Response code:400 Response message:Bad Request Jmeter api testing with JIRA with POST method

When I try to create an issue in Jira using the JMeter API test with the POST method, it does not allow the issue to be created and shows the error below. I have uploaded images of all my configuration.

Can anybody correct me?

Thanks in advance.

2020-12-27 19:48:35,841 ERROR o.a.j.p.h.s.HTTPJavaImpl: readResponse: java.io.IOException: Server returned HTTP response code: 400 for URL: https://learntestapi.atlassian.net/rest/api/2/issue 2020-12-27 19:48:35,841 ERROR o.a.j.p.h.s.HTTPJavaImpl: Cause: java.io.IOException: Server returned HTTP response code: 400 for URL: https://learntestapi.atlassian.net/rest/api/2/issue

[screenshots of the JMeter HTTP request configuration]

In Go is 'go build -race main.go' useless?

Taking this example I stole from a website :

package main
import "fmt"

func main() {
    i := 0
    go func() {
        i++ // write
    }()
    fmt.Println(i) // concurrent read
}

Nothing is detected with build :

user@VM:~$ go build -race main.go 
user@VM:~$ 

But, when running it :

user@VM:~$ go run -race main.go 
0
==================
WARNING: DATA RACE
Write at 0x00c00001c0b0 by goroutine 7:
  main.main.func1()
      /home/user/main.go:8 +0x4e

Previous read at 0x00c00001c0b0 by main goroutine:
  main.main()
     /home/user/main.go:10 +0x88

Goroutine 7 (running) created at:
  main.main()
      /home/user/main.go:7 +0x7a
==================
Found 1 data race(s)
exit status 66

So my question is: when is -race used with build better than with run or test?

Sinon - Mocking anonymous function that's called on import

Is there a way to stub an anonymous function that's called on import?

// a.js

function init() {
    // code here
}

init(); // this function is called

export function anything() {}

and another file that imports this file

// b.js
import anything from "a.js";

The problem I'm having is that I need to stub the init function in a test file that tests b.js. I can't export it as well and then call it in b.js.

Is there a way to mock the init function? I've tried using some suites that would help with mocking but the problem is that it seems you have to import the file before stubbing the function.

proxyquire (using es6 imports) and rewiremock won't work.

Verilog - Soft CPUs - how to debug and test in a sane way?

I'm currently working on a Verilog implementation of the CHIP-8 console, would like to implement a RISC-V softcore next. I'm still a bit green with hardware design and HDLs, coming from the software development world. I've developed the emulators for these architectures in C beforehand to get more familiar, but the HDL is of course a completely different territory.

So far I've been testing the opcodes in a crude way - generating a RAM with the program binary loaded at a correct address, executing several instructions and then looking at the waveform from simulation, spending some mental energy to assert whether the state is correct after various clock cycles.

I suppose this really doesn't scale well, so:

How does one test (and debug) the design in a reasonable way?

When working on a software emulator, I was able to use a battery of tests (handwritten or assembled to binary) that followed a certain structure - for example, set up a special register to mark a success or a failure state and invoke a system call opcode to state that the test is finished. However, as these tests relied on several fundamental instructions (branching, storing a value into a register, syscall), I needed a way to make sure these work first.

To assert that the fundamental instructions work correctly I loaded a tiny test binary directly in the test suite, and peeked into the CPU/memory internals within the software-based emulator after the instruction executed to validate the resulting internal state.

Does this kind of approach translate to hardware as well?

I suppose I could write a script that would generate a testbench per each test case, that would load the program binary into the RAM, reset the CPU, run the clock until we get the special call, then use Verilog assertions on the final state of the "result register". This way I would end up with a bunch of testbenches that I could then run in sequence with Modelsim or Icarus Verilog.

Is there a better way of testing soft CPUs?

samedi 26 décembre 2020

How to use dataProvider against multiple testng method using selenium testng code

Could you please suggest what the issue with my code might be? After using a data provider, my test methods Test2 and Test3 both appear to be unreachable. Can you suggest how I could fix this without putting all the code into a single test method?

  1. While using the data provider I'm only able to reach the Test1 code; after executing my code I get the output below.

OUTPUT

Before Test case

Print Test1

whereas my expected output should be:

OUTPUT

Before Test case

Print Test1

Print Test2

Print Test3

public class Test extends DriverConfig {
    
    @DataProvider
    public Iterator<String> getTestData() throws Exception {
        driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);
        ArrayList<String> getclientProduct = sftpCon.clientProduct("Client Type");
        System.out.println("getclientProduct----------" + getclientProduct);
        System.out.println("Before Test case");
        return getclientProduct.iterator();
    
    }
    
    @Test(priority = 1, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class,dataProvider = "getTestData")
    public void A1(String clientName,String clientAddress) throws Exception {

        wait = new WebDriverWait(driver, 10);
        driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
        System.out.println("Print Test1");
    }
    @Test(priority = 2, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class)
    public void A2() throws Exception {
        wait = new WebDriverWait(driver, 80);
        System.out.println("Print Test2");
        
    }

    @Test(priority = 3, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class)
    public void A3() throws Exception {
        wait = new WebDriverWait(driver, 10);
        System.out.println("Print Test3");
        
    }

}

How to use dataprovider to run multiple test cases in testng with different set of test data from excel file using

Could anyone please suggest what the issue with my code might be? After using a data provider, my test methods Test2 and Test3 both appear to be unreachable. Can you suggest how I could fix this without putting all the code into a single test method?

  1. While using the data provider I'm only able to reach the Test1 code; after executing my code I get the output below.

OUTPUT

Before Test case

Print Test1

whereas my expected output should be:

OUTPUT

Before Test case

Print Test1

Print Test2

Print Test3

public class Test extends DriverConfig {
    
    @DataProvider
    public Iterator<String> getTestData() throws Exception {
        driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);
        ArrayList<String> getclientProduct = sftpCon.clientProduct("Client Type");
        System.out.println("getclientProduct----------" + getclientProduct);
        System.out.println("Before Test case");
        return getclientProduct.iterator();
    
    }
    
    @Test(priority = 1, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class,dataProvider = "getTestData")
    public void Test1(String clientName,String clientAddress) throws Exception {

        wait = new WebDriverWait(driver, 10);
        driver.manage().timeouts().implicitlyWait(20, TimeUnit.SECONDS);
        System.out.println("Print Test1");
    }
    @Test(priority = 2, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class)
    public void Test2() throws Exception {
        wait = new WebDriverWait(driver, 80);
        System.out.println("Print Test2");
        
    }

    @Test(priority = 3, retryAnalyzer = com.brcc.tool.RetryFailedTestCases.RetryTestCases.class)
    public void Test3() throws Exception {
        wait = new WebDriverWait(driver, 10);
        System.out.println("Print Test3");
        
    }

}
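
One thing visible in the code (offered as a hedged observation): the provider returns an Iterator<String>, i.e. one value per row, while Test1 declares two String parameters, so the row shape does not match the method signature. A minimal sketch of a provider whose rows do match (the values are illustrative):

@DataProvider(name = "getTestData")
public Object[][] getTestData() {
    // each row supplies the two Strings that Test1 declares
    return new Object[][] {
        { "clientA", "10 Main St" },
        { "clientB", "20 High St" },
    };
}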

Multi-Tenancy test case for testing

I have no experience writing test cases for multi-tenancy, and I need a few example test cases to get acquainted with it before I start writing my own.

Java Test for a friend [closed]

Hello there, I have a friend who is a Cisco CCNA and is trying to pass some exams; they have the tasks below. I kindly ask for some assistance here.

1)

Write a Java program that will read integers and print their squares (x²) until it reads 0. At the end, the program must print, separately, the sum of the positive numbers and the sum of the negative numbers it read. The calculation of the square is to be done by definition, using a corresponding function. I think for a developer these are very easy...

2) Write the code of an application that accepts numbers between 10 and 100. Numbers will be recorded, but only displayed if they are not copies of a number already recorded. Use as few array positions as possible to solve the problem. Display the full set of unique input values. Thank you guys!
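
For reference, a minimal sketch of the first exercise as described (reading integers from standard input; the class and method names are illustrative):

import java.util.Scanner;

public class Squares {
    // "by definition", via a corresponding function
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int sumPositive = 0, sumNegative = 0;
        while (true) {
            int x = in.nextInt();
            if (x == 0) break;                 // stop on 0
            System.out.println(square(x));     // print the square of each number
            if (x > 0) sumPositive += x; else sumNegative += x;
        }
        System.out.println("Sum of positive numbers: " + sumPositive);
        System.out.println("Sum of negative numbers: " + sumNegative);
    }
}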

MemberData throws exception when property throws exception

There is the following code:

public class User
{
    private Address _address;

    public string Name { get; set; }
    public int Age { get; set; }
    public int Score { get; set; }
    public bool HasAddress => _address != null;
    public Address Address => _address ?? throw new Exception();
}

public class Address
{
    // ...
}

And Score_ReturnsCorrectValue unit-test to test User:

public class UserTests
{
    [Theory]
    [MemberData(nameof(ReturnsCorrectValueTestCases))]
    public void Score_ReturnsCorrectValue(User user, int expectedScore)
    {
        // Act
        // ...

        // Assert
        user.Score.Should().Be(expectedScore);
    }

    public static IEnumerable<object[]> ReturnsCorrectValueTestCases()
    {
        var user1 = new User { Age = 65 };
        var user2 = new User { Age = 21 };

        yield return new object[]
        {
            new object[] {user1, 100},
            new object[] {user2, 50}
        };
    }
}

xUnit runner failed on User.Address property:

[screenshot of the xUnit runner failure]

So it looks like MemberData works via reflection and doesn't like a property that can throw an exception.


I'm new to the MemberData attribute. Is it possible to change the unit test in some way without changing the User.Address logic?
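
Independent of the Address question, one detail stands out in the test data method (offered as a hedged observation): it yields a single row that itself contains two object[] values, rather than one row per test case. A minimal sketch of the usual one-row-per-case shape, so each row maps onto (User user, int expectedScore):

public static IEnumerable<object[]> ReturnsCorrectValueTestCases()
{
    yield return new object[] { new User { Age = 65 }, 100 };
    yield return new object[] { new User { Age = 21 }, 50 };
}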

Why did iBookService's injection fail in the Spring Boot framework? Thanks

@Service
public interface IBookService {

    public void getDetailInfo(Book book);

    public void getBackBook(Book book);

    public void buyBook(Book book);

}

————————————————————————————————

@SpringBootApplication(exclude = {
        DataSourceAutoConfiguration.class,
        DataSourceTransactionManagerAutoConfiguration.class,
        HibernateJpaAutoConfiguration.class,
        MongoAutoConfiguration.class})
public class Application {

————————————————————————————————

public class AOPTest extends BaseTest {
    @Autowired
    IBookService iBookService;

}

[screenshot of the injection error]

Why did iBookService's injection fail in the Spring Boot framework? Thanks.
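
One common cause worth checking (a hedged note, since the full error is only in the screenshot): Spring creates beans from concrete classes, so the @Service annotation usually belongs on an implementation of the interface rather than on the interface itself. A minimal sketch, with an illustrative class name:

@Service
public class BookService implements IBookService {

    @Override
    public void getDetailInfo(Book book) { /* ... */ }

    @Override
    public void getBackBook(Book book) { /* ... */ }

    @Override
    public void buyBook(Book book) { /* ... */ }
}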

If statement inside cypress.io test

I'm new to testing and to cypress.io in particular, and I'm trying to test the registering flow in my app.

I want to check two scenarios -

  1. If the user is trying to register with an existing username, an error message pops up.
  2. Otherwise, if proper inputs are provided, the username registers successfully.

How can I do so in one test? Is there an option to use an if statement with Cypress?

Thanks a lot in advance.

How to write test scenario/ outline in Karate for Angular service API call

I have created a Karate test suite for testing REST APIs, but I am unable to figure out how to do the same for the API calls made from Angular. Can anyone please provide me with a sample scenario/scenario outline for testing an Angular service's API call?

Using a dependent project for test objects

Assume that I have multiple java projects, all of them use maven: Project A and Project B

  • Project A is a library

  • Project B is an application which uses Project A

Both projects are developed simultaneously. New requirements from Project A are fulfilled with a new release of Project B.

I mostly tested my code manually which - as expected - yielded less than optimal code coverage and therefore quality.

I'm therefore currently looking into using Junit to write tests for my projects. Since Project A mostly defines interfaces, I would have to - at least I believe so - write implementations of these classes in the test code.

As I already have implementations of those interfaces defined in Project B I figured I could use Project B as test dependency in Project A, which at first glance would make perfect sense as nearly all features of Project A are used in those implementations, as I would (at least for now) not develop a feature in Project A that is not used in Project B (Keep in mind: this might change for future projects e.g. a newly created Project C, which might also depend on Project A).

On the other hand this would create some sort of circular dependencies. In Project B's pom.xml I would define a regular dependency on Project A, while Project A would have a dependency on Project B in the test-scope.

Therefore the following questions:

  • Is this even technically possible or would I run into issues with the circular dependencies right from the start or at a later point in time? (I believe I've seen someone doing something similar, where they had to define the correct version of Project A in Project A's <dependencyManagement> section to make sure the current code is used while testing and not the version defined in Project B. I can't find it anymore though, so I can't verify this, but I remember being confused about why they would define their own project's version in that section. Edit: I believe I found it again, but it seems to be something else; I was probably just kind of confused while researching JUnit and testing ;) )
  • Is it desirable to use a downstream project for classes in a test case, or are there issues I could face at a later point in time? (My first thought would be that I'd run into a problem as soon as I replace Project A with a third-party dependency in Project B, therefore losing all my test cases for Project A.)

vendredi 25 décembre 2020

Ways to get a comprehensive report of multiple collections using newman/postman?

I have multiple test collections. I am running all of them using Newman commands, but they generate separate reports. I want to get an overview across all the reports of how many tests passed and how many failed.

Python class object is not callable

I'm trying to run several tests against code that I wrote to learn the concept of linked lists. In the first test, I'm getting this error:

Traceback (most recent call last):
  File "LinkedList_test.py", line 14, in test_insert_first_node_to_head
    self.assertEqual('head', self.linked_list.head().value)
TypeError: 'Node' object is not callable

The LinkedList code:

class Node():
    def __init__(self, value = None):
        self.value = value
        self.next = None
        self.previous = None

class LinkedList():
    def __init__(self, head = None):
        self.head = head

    def append_node_to_tail(self, new_element):
        current = self.head
        if self.head:
            while current.next:
                current.next.previous = current
                current = current.next                    
            current.next = new_element
        else:
            self.head = new_element
               
    def tail():
        current = self.head
        while current.next:
            current = current.next
        return current

   ...other methods

The test code:

import unittest
from LinkedList import LinkedList, Node

class TestLinkedList(unittest.TestCase):
    def setUp(self):
        self.linked_list = LinkedList()

    def test_insert_first_node_to_tail(self):
        self.linked_list.append_node_to_tail(Node('head'))
        self.assertEqual('tail', self.linked_list.tail().value)

   ...other tests
if __name__ == '__main__':
    unittest.main()

Does someone know what I'm doing wrong?
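
One detail visible in the traceback (offered as a hedged observation): it accesses self.linked_list.head() with parentheses, but head is a plain attribute on LinkedList, so the Node stored there ends up being "called", which is exactly the reported TypeError. A minimal sketch of the attribute access, plus the self parameter that tail() will need to be callable on an instance:

# head is an attribute, not a method, so drop the parentheses:
self.assertEqual('head', self.linked_list.head.value)

# tail() needs `self` to work as an instance method:
def tail(self):
    current = self.head
    while current.next:
        current = current.next
    return current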

How to mock "NOW()" for TypeORM MySQL?

I use Jest for testing and TypeORM as ORM, MySQL is the database.

I have a time-dependent query, which will return results only while DAY(NOW()) is at most 27.

.andWhere('DAY(NOW()) <= 27')

I don't want my tests to break on the 28th; how should I mock/replace it at the database level?

Is there more elegant way?

Thank you
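
One pattern sometimes used here (a hedged sketch; the helper name, entity type, and parameter name are illustrative) is to pass "now" into the query as a parameter, with the real clock as the default, so a test can pin it to a safe date instead of relying on the database's NOW():

import { SelectQueryBuilder } from 'typeorm';

function withDayCutoff<T>(qb: SelectQueryBuilder<T>, now: Date = new Date()): SelectQueryBuilder<T> {
  // same cutoff as before, but the reference date is injectable
  return qb.andWhere('DAY(:now) <= 27', { now });
}

// In a test: withDayCutoff(qb, new Date('2020-12-01')) keeps the condition satisfied
// regardless of the day the suite actually runs.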

Python Script for Running Hackerrank tests locally

I found this script for running Hackerrank tests locally. According to the instructions, I should "Just run it in the sample test cases folder that you downloaded from hackerrank."

I have not yet been able to get it to work. I'm not sure what arguments I should pass, and also whether I should unzip the test files first (the code uses rb mode so maybe the zipped ones?)

Can anyone please provide more detail about how to get this working?

# to use with python3, tested on Windows
# usage (inferred from the argument handling below):
#   python this_script.py <solution.py> <input file> <expected output file>
import subprocess
import sys
cmd = [
    'python',
    sys.argv[1]
]
process = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        stdin=subprocess.PIPE, shell=True)

input = open(sys.argv[2], "rb").read()
out_file = open(sys.argv[3], "rb").read()
out, err = process.communicate(input)

if err:
    print(err.decode("utf-8"))

out = out[:-1]
if out == out_file:
    print("WIN")
print(out.decode("utf-8"))

load testing through wifi/lan + vpn

I really could not find an answer to my question on the web. I am currently doing load tests for a web service, for example: how the service will handle 15 threads in 1 second; for that I use JMeter. I always get different average response times for 15 threads. When I'm on my company's internal network, I get wonderful results, but when I am at home, using LAN/WiFi + VPN to access the web service, I get horrible results. When I test it through the VPN, the web service cannot handle 30 threads in 1 second and the average response time is about 13 seconds; from the company's network, the average response time is 4-5 seconds. Also, that web service would also be called from a system using the VPN. My question is: what is the correct result and the correct way to test it, from the company's network or through the VPN?

How to mock or stub an external request, not necessarily an http one?

I want to mock or stub an external request. It's not a simple HTTP request, there's no direct access to an underlying library, and there's no information about how it's implemented under the hood, so it should be treated as some external request that could be HTTP, file, or database. Simplified code is as follows:

def my_external_call
  Lib1::get_some_data()
end

I know what my_external_call() can return. How do I mock or stub it?
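
A hedged sketch of how this is usually done with RSpec message stubs (assuming RSpec, and that Lib1 responds to get_some_data as shown above): the boundary method is stubbed regardless of what it does internally.

allow(Lib1).to receive(:get_some_data).and_return("canned result")

# then exercise whatever calls my_external_call and assert on the canned result
# (the receiver depends on where my_external_call is defined)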

What is the time complexity of this for-loop case?

I'm trying to figure out what is the time complexity of the following for-loop case.

double n; //n > 0 and a real number

for(double i=1; i<n; i*=(1+(1/n))){//some O(1) statements};

I argued with my professor that the time complexity is still O(log n), but he says it's not, so I'm confused. In for(double i=1; i<n; i*=2){//some O(1) statements};, the time complexity is indeed O(log n). Why isn't the former the same? Please help.
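
For reference, a back-of-the-envelope count of the iterations of the i *= (1 + 1/n) loop:

(1 + 1/n)^k >= n   =>   k >= ln(n) / ln(1 + 1/n) ≈ ln(n) / (1/n) = n * ln(n),

since ln(1 + 1/n) ≈ 1/n for large n. The i *= 2 loop, by contrast, needs only about log2(n) iterations, which is why the two loops land in different complexity classes.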

Difference between Laravel testing method assertRedirect($uri) and assertLocation($uri)?

I was reading the Laravel HTTP tests documentation and a question occurred to me. I can't tell the difference between assertLocation($uri) and assertRedirect($uri), since both seem to be about redirecting to a specific URI.

Anyone who could help would be much appreciated.

jeudi 24 décembre 2020

How to check how many times a function has been called in a Golang test?

I'm working on a web server with an API which calls a function. This function does a heavy job and caches the result. I've designed it so that if there is no cache and multiple users call this API with the same parameters at the same time, the server calls that function only once for the first request, and all the other requests wait for the job to finish and return the response from the cache.

I've written a test for it in this way:


func TestConcurentRequests(t *testing.T) {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            // do the request
            wg.Done()
        }()
    }
    wg.Wait()
}

I can check that my code works correctly by printing a value inside that heavy function and checking the console to see if it appears only once, but I'm searching for a way to fail the test if this function has been called more than once. Something like this:


if n := NumberOfCalls(MyHeavyFunction); n > 1 {
    t.Fatalf("Expected heavy function to run only once but called %d times", n)
}
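
Go's testing package has no built-in call counter, but a hedged sketch of one common approach is to have the heavy function (or a test seam around it) bump a counter via sync/atomic, which the test reads after wg.Wait(). Names are illustrative:

var heavyCalls int64

func heavyWork() {
    atomic.AddInt64(&heavyCalls, 1)
    // ... the expensive, cached computation ...
}

// in TestConcurentRequests, after wg.Wait():
if n := atomic.LoadInt64(&heavyCalls); n != 1 {
    t.Fatalf("expected heavy function to run once, ran %d times", n)
}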

Why does Rust reset some variables after a loop?

I finished reading The Rust Programming Language some time ago and I'm creating a simple HTTP server for the sake of teaching myself Rust.

In a function meant to parse a &[u8] and create a HttpRequest object, I noticed that a loop that should be parsing the headers is actually not updating some variables used to track the state of parsing:

pub fn parse_request(buffer: &[u8]) -> Result<HttpRequest, &'static str> {
    let (mut line, mut curr_req_reader_pos) = next_req_line(buffer, 0);

    let (method, path, version) = parse_method_path_and_version(&line)?;

    let (line, curr_req_reader_pos) = next_req_line(buffer, curr_req_reader_pos);
    //eprintln!("--- got next line: {}, {} ---", curr_req_reader_pos, String::from_utf8_lossy(line));
    let mut headers = vec![];
    let mut lel = 0;
    while line.len() > 0 {
        //eprintln!("LOOP");
        let header = parse_header(&line);
        eprintln!("--- parsed header: {} ---", String::from_utf8_lossy(line));
        headers.push(header);
        let (line, curr_req_reader_pos) = next_req_line(buffer, curr_req_reader_pos);
        //eprintln!("--- got next line: {}, {} ---", curr_req_reader_pos, String::from_utf8_lossy(line));
        let (line, curr_req_reader_pos) = next_req_line(buffer, curr_req_reader_pos);
        //eprintln!("--- got next line: {}, {} ---", curr_req_reader_pos, String::from_utf8_lossy(line));
        //eprintln!("LOOP");
        lel += 10;
        //break;
    }

    let (line, curr_req_reader_pos) = next_req_line(buffer, curr_req_reader_pos);
    //eprintln!("--- got next line: {}, {} ---", curr_req_reader_pos, String::from_utf8_lossy(line));
    //eprintln!("{}", lel);

    let has_body;
    match method.as_ref() {
        "POST" | "PUT" | "PATCH" => has_body = true,
        _ => has_body = true
    };

    let body;
    if has_body {
        let (line, curr_req_reader_pos) = next_req_line(buffer, curr_req_reader_pos);
        body = String::from_utf8_lossy(line).to_string();
    } else  {
        body = String::new();
    }

    Ok(HttpRequest {
        method,
        path,
        version,
        headers,
        has_body,
        body
    })
}

There's also tests for each method:

#[test]
fn test_next_req_line() {
    let req = "GET / HTTP/1.1\r\nContent-Length: 3\r\n\r\n\r\n\r\nlel".as_bytes();

    let (mut lel, mut pos) = next_req_line(req, 0);
    assert_eq!("GET / HTTP/1.1", String::from_utf8_lossy(lel));
    assert!(lel.len() > 0);
    assert_eq!(16, pos);

    let (lel, pos) = next_req_line(req, pos);
    assert_eq!("Content-Length: 3", String::from_utf8_lossy(lel));
    assert!(lel.len() > 0);
    assert_eq!(35, pos);

    let (lel, pos) = next_req_line(req, pos);
    assert_eq!("", String::from_utf8_lossy(lel));
    assert!(lel.len() == 0);
    assert_eq!(37, pos);

    let (lel, pos) = next_req_line(req, pos);
    assert_eq!("", String::from_utf8_lossy(lel));
    assert!(lel.len() == 0);

    let (lel, pos) = next_req_line(req, pos);
    assert_eq!("", String::from_utf8_lossy(lel));
    assert!(lel.len() == 0);

    let (lel, pos) = next_req_line(req, pos);
    assert_eq!("lel", String::from_utf8_lossy(lel));
    assert!(lel.len() == 3);
}

    #[test]
    fn test_parse_request() {
        let mut request = String::from("GET / HTTP/1.1\r\n");
        request.push_str("Content-Length: 3\r\n");
        request.push_str("\r\n");
        request.push_str("lel");
        let request = request.as_bytes();

        match parse_request(&request) {
            Ok(http_request) => {
                assert_eq!(http_request.method, "GET");
                assert_eq!(http_request.path, "/");
                assert_eq!(http_request.version, "HTTP/1.1");
            },
            Err(_) => {
                println!("Failed");
            }
        }
    }

The commented lines are just to show what's happening, and that's where I saw this output:


    Finished test [unoptimized + debuginfo] target(s) in 0.53s
     Running target/debug/deps/rweb-6bbc2a3130f7e3d9

running 4 tests
test http::testing::test_next_req_line ... ok
test http::testing::test_parse_header ... ok
test http::testing::test_parse_method_path_and_version ... ok
--- parsed method path version  ---
--- got next line: 35, Content-Length: 3 ---
LOOP
--- parsed header: Content-Length: 3 ---
--- got next line: 37,  ---
--- got next line: 39, lel ---
LOOP
--- got next line: 37,  ---
10
test http::testing::test_parse_request ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

It seems that the while loop is not re-assigning the line and curr_req_reader_pos variables, but to me it made perfect sense to expect the loop to update these variables and parse every header. However, it works perfectly outside a loop, as can be seen in the tests.

I can't figure out why this happens with my current understanding of Rust. Why does it happen?
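
A hedged sketch of the usual explanation: each let inside the while body creates a new binding that shadows the outer line / curr_req_reader_pos and is dropped at the end of the iteration, so the loop condition never sees the new values. Reassigning mutable bindings declared outside the loop updates them instead (simplified, with the intermediate header/blank-line handling omitted):

let (mut line, mut curr_req_reader_pos) = next_req_line(buffer, 0);
// ... parse the request line, then fetch the first header line ...
while line.len() > 0 {
    let header = parse_header(&line);
    headers.push(header);
    // assign to the existing bindings instead of introducing shadowed ones
    let (next_line, next_pos) = next_req_line(buffer, curr_req_reader_pos);
    line = next_line;
    curr_req_reader_pos = next_pos;
}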