mercredi 31 octobre 2018

Meaning of terms after analysis in Locust Load Testing

I am using Locust (a modern open-source load-testing tool) for load testing of APIs.
As it is a simulation and analysis tool, I am not able to understand some of the terms.
Below is a screenshot of the test which I have done on the API.

Test Run

Terms I want to know about:

  1. The relationship between "Number of users to simulate" and "Hatch Rate" (users spawned/second).
  2. From the above image, the meaning of Median (ms), Average (ms), and Content Size (bytes).
  3. min_wait and max_wait, the variables we override with our own values in the WebsiteUser(HttpLocust) class, and their significance.

Next, in the Charts tab, Locust shows 3 graphs: Total Requests/Second, Response Time (ms), and Number of Users.
I am not able to make sense of these charts.
In Total Requests/Second, should I look at peaks with respect to time, as in the Number of Users vs. Time graph? How do I make sense of all the charts?

Thank You.

It's a broad question, but I need to know these terms for a better understanding of the graphs and data I get from the analysis.
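For what it's worth, the difference between Median (ms) and Average (ms) is easy to see with made-up numbers; the response times below are purely illustrative:

```python
from statistics import mean, median

# Hypothetical response times in ms for one endpoint; one slow outlier.
response_times = [90, 95, 100, 105, 110, 115, 2500]

print(median(response_times))  # 105 -- half the requests were faster than this
print(mean(response_times))    # 445.0 -- dragged up by the single outlier
```

A large gap between the two usually means a few very slow requests rather than a uniformly slow endpoint, which is why load-testing tools report both.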

Rails test fixtures only available when running specific test file

So I have a test suite using Minitest (no RSpec) with some fixtures, and everything is working fine. It still works fine... but only if I run that test file directly with rails test <test_file>.rb.

If I just run rails test, one of the fixtures, not all of them, shows up as nil. Again, I know it's specified correctly with no apparent syntax errors because it passes when I run it directly. My test helper already has fixtures :all. I'm not sure there's any code to paste, since it works under the right conditions, and this affects all of the tests.

The only other change made around the time this error appeared was moving the test file from a subfolder in tests/controllers/ to tests/controllers/api/, since it is an API test. But it can still access all the other fixtures.

What could possibly cause a test to fail only when it is not run directly?

Does anyone wanna help me test my python module?

So, I'm pretty sure this doesn't belong here, but... I'm a senior in HS going into college for CST software development, and I'm an intermediate Python programmer. I'm currently working on a "game engine" for text-based RPG games. The reason for the quotes around 'game engine' is that I'm pretty sure it's not technically an engine, but I don't know what else to call it. At this current time it is in Alpha v0.5.0. I have plans for the beta version that will make it more of an 'engine'; I'm planning to rewrite the entire project. Anyway, it's supposed to have very few limitations. In a way it's more of a template, but it's really a project for practice. I need some testers/people to try to develop with it; I need help finding bugs, and I need suggestions for features. You can install the module with pip (pip install TARBS-Engine). Here is a link to the git repo. Thanks! -Steve

Edit: There is some documentation for it so you know how to use it lol

Issues on automating MSD365 with selenium

I tried to automate Dynamics 365 using Selenium, but I am facing a lot of issues:

  1. I used 'Id' as the element locator, but it keeps changing across different instances.
  2. The element locator 'Name' does not work in all instances.
  3. The XPath is built from the 'id', so it is also dynamic.

Because of all this I am not able to run the code in IE, although the code works in Chrome.

Can someone help with the issue?
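For context, the usual workaround for generated IDs is to anchor on a stable fragment of the attribute instead of its full value. A minimal sketch of building such locators (the tag, attribute, and fragment values here are made up):

```python
def contains_xpath(tag, attr, fragment):
    """XPath matching elements whose attribute contains a stable fragment."""
    return f"//{tag}[contains(@{attr}, '{fragment}')]"

def starts_with_css(tag, attr, prefix):
    """CSS equivalent using the ^= (starts-with) attribute selector."""
    return f"{tag}[{attr}^='{prefix}']"

print(contains_xpath("input", "id", "submitBtn"))   # //input[contains(@id, 'submitBtn')]
print(starts_with_css("input", "id", "submitBtn"))  # input[id^='submitBtn']
```

With Selenium these strings would be passed to find_element with the XPath or CSS selector strategy; locators based on stable fragments, visible labels, or relative structure tend to survive per-instance ID changes better than absolute paths.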

Will an element with an href attribute always work when clicked

I am writing a suite of automated UI tests. I have a set of tests that verifies the links in a navbar work correctly; they take an annoyingly long time because each test loads 2 pages, and there are many links in the nav bar. I am wondering if it is necessary to actually click the links.

One of the links would look like this; they're all basically the same, all contained inside a list of <li> elements:

<a href="/projects/7d9162e5-e59c-452e-b9f5-684a2e0f2924/home" data-reactid=".0.2.0.0.0.$0.0">
  <span class="icon icon-home" data-reactid=".0.2.0.0.0.$0.0.0"></span>
  <span class="label" data-reactid=".0.2.0.0.0.$0.0.1">Home</span>
</a>

I could grab the content from the href attribute and request the page programmatically (don't load it in the browser) to assert that the href is correct and this would be significantly faster.

Is there any chance that an element could have an href attribute that points to the page as expected, but for whatever reason clicking on this element could be broken?
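Extracting the href values without a browser is straightforward with the stdlib parser; the HTML below mirrors the structure above with a shortened, hypothetical URL. Note that the worry in the question is real: an attribute like data-reactid suggests a client-side router may intercept clicks, and a plain HTTP check of the href will not catch a broken click handler.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(value for name, value in attrs if name == "href")

navbar = '''
<li><a href="/projects/123/home" data-reactid=".0.2.0.0.0.$0.0">
  <span class="label">Home</span>
</a></li>
'''
collector = LinkCollector()
collector.feed(navbar)
print(collector.hrefs)  # ['/projects/123/home']
```

Each collected href could then be requested directly (asserting a 200 response) as the faster check the question proposes, accepting that it verifies the target page rather than the click behaviour.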

Allow Travis emulator Runtime Permissions for testing

My activity displays a map (that uses current location) and I have to implement a test for it.

For that, the ACCESS_FINE_LOCATION permission is required. I tried to use UiAutomator with:

private void allowPermissionsIfNeeded() {
    if (Build.VERSION.SDK_INT >= 23) {
        UiObject allowPermissions = device.findObject(new UiSelector().text("ALLOW"));
        if (allowPermissions.exists()) {
            try {
                allowPermissions.click();
            } catch (UiObjectNotFoundException e) {
                // The dialog disappeared between exists() and click(); nothing to do.
            }
        }
    }
}

It works locally but not on Travis CI. When I run my test, which is simply:

onView(withId(R.id.mapView)).check(matches(isDisplayed()));

Travis CI returns a:

android.support.test.espresso.NoMatchingViewException: No views in hierarchy found matching

Lighthouse - robots.txt error Syntax not understood

When I test my website with Lighthouse, it reports 50 errors found in robots.txt, and I don't know what the problems are or how to solve them. It shows the content of my index.html file with an error for each line. What is the problem, and how do I solve it?

org.json.JSONException: Unparsable JSON string

I'm trying to run unit tests, but I keep getting this error. My test:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureMockMvc
@ActiveProfiles("test")
public class BoletoResourceTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private BoletoService boletoService;

    @Test
    @WithMockUser(username = "sa")
    public void criarNovoBoletoTeste() throws Exception {
        Boleto mockBoleto = new Boleto(1L, "Testando XUXU");

        Mockito.when(boletoService.inserir(Mockito.any(Boleto.class))).thenReturn(mockBoleto);
        ObjectWriter objectWriter = new ObjectMapper().writer().withDefaultPrettyPrinter();
        String mockBoletoJSON = objectWriter.writeValueAsString(mockBoleto);

        this.mockMvc.perform(MockMvcRequestBuilders.post("/api/teste")
                .with(SecurityMockMvcRequestPostProcessors.csrf())
                .contentType(MediaType.APPLICATION_JSON)
                .accept(MediaType.APPLICATION_JSON)
                .content(mockBoletoJSON))
            .andExpect(MockMvcResultMatchers.status().is(201))
            .andExpect(MockMvcResultMatchers.content().json(mockBoletoJSON));
    }
}

And this is the error I'm getting:

org.json.JSONException: Unparsable JSON string:

I'm using JDK 11 and Gradle 4.10.2.

BDD Mockito - alias for verify(...) when using argument captor?

I wrote a test which uses BDDMockito and an ArgumentCaptor:

@Test
public void loadNoItemFromRepository_showsMissingItem() {
    //given
    itemDetailPresenter = new ItemDetailPresenter(UNCHECKED_ITEM.getId(), itemsRepository, itemDetailView);
    given(itemDetailView.isActive()).willReturn(true);

    //when
    itemDetailPresenter.load();
    verify(itemsRepository).getItem(eq(UNCHECKED_ITEM.getId()), getItemCallbackArgumentCaptor.capture());
    getItemCallbackArgumentCaptor.getValue().onDataNotAvailable();

    //then
    then(itemDetailView).should().showMissingItem();
}

The verify() placed in the //when section is confusing, because the name suggests it should be in the verification section (//then). Is there an alias for verify() so that I can use it with an argument captor and the name will be more appropriate for the //when section?

Assertions for REST-ASSURED test

What I'm trying to do is create assertions in a REST Assured test.

I observed that "pointSum" actually matches 0.45, so I tried:

.body("pointSum", hasItems(0.45)) with fail as: Expected: (a collection containing <0.45>) Actual: 0.45
.body("pointSum", is(0.45))       with fail as: Expected: is <0.45> Actual: 0.45
.body("pointSum", equalTo(0.45))  with fail as: Expected: <0.45> Actual: 0.45

Similarly, I observed that "topics.point" actually matches [0.45], and I tried:

.body("topics.point", hasItems(0.45)) with fail as: Expected: (a collection containing <0.45>) Actual: [0.45]
.body("topics.point", is(0.45))       with fail as: Expected: is <0.45> Actual: [0.45]
.body("topics.point", equalTo(0.45))  with fail as: Expected: <0.45> Actual: 0.45

My response looks like this:

{
    "dateRange": {
        "begin": "2016-09-01",
        "end": "2016-09-01"
    },
    "pointSum": 0.45,
    "topics": [
        {
            "name": "Test topic",
            "point": 0.45,
            "texts": [
                {
                    "name": "Test text",
                    "point": 0.45,
                    "authors": [
                        {
                            "name": "John",
                            "point": 0.18
                        },
                        {
                            "name": "Thomas",
                            "point": 0.22
                        }
                    ]
                }
            ]
        }
    ]
}

(I'm not sure if you need to know, but there can be more than one topic, more than one text, and more than one author.)
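One possible explanation (an assumption, not confirmed by the output alone): the JSON path evaluation may hand back a single-precision Float while the 0.45 literal in the matcher is a double, and the two values are not equal bit-for-bit even though both display as 0.45. The effect can be reproduced in a few lines of Python by round-tripping 0.45 through 32-bit precision:

```python
import struct

# Round-trip 0.45 through a 32-bit float, as a parser that returns
# single-precision numbers effectively would.
as_float32 = struct.unpack('f', struct.pack('f', 0.45))[0]

print(as_float32)          # slightly below 0.45
print(as_float32 == 0.45)  # False, despite both rounding to "0.45" for display
```

If that is what is happening here, the commonly suggested fix on the Java side is to compare against a float literal (e.g. equalTo(0.45f)) or to assert with a tolerance rather than exact equality.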

Android Studio Installs Apk More Than Once

I developed an app and deleted the JUnit test implementations from my Gradle file to remove the tests package. I'm not sure if it was after that, but since then Android Studio installs my app not once but four times, and all of the installs are the same.

This is my AndroidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="braziliancard.veraodedescontos_lojista">

    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.CAMERA" />

    <application
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher"
        android:theme="@style/AppTheme"
        android:allowBackup="true"
        android:debuggable="true"
        tools:ignore="HardcodedDebugMode">

        <activity android:name=".SplashScreenActivity"
            android:theme="@style/AppTheme.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER"/>

            </intent-filter>
        </activity>

        <activity android:name=".MainActivity" android:theme="@style/AppTheme.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <activity android:name=".LoginLojista" android:theme="@style/AppTheme.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <activity android:name=".utils.ToolbarCaptureActivity" android:theme="@style/AppTheme.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>


        <meta-data
            android:name="preloaded_fonts"
            android:resource="@array/preloaded_fonts" />
    </application>

</manifest>

This is a screenshot of what is happening:


This is my build.gradle (app) file:

buildscript {
    repositories {
        google()
        maven {
            url "https://maven.google.com"
        }
        jcenter()
    }

    dependencies {
        classpath 'me.tatarka:gradle-retrolambda:3.4.0'
    }
}

apply plugin: 'com.android.application'

android {
    //compileSdkVersion project.androidTargetSdk
    compileSdkVersion 26
    buildToolsVersion '28.0.3'
    defaultConfig {
        minSdkVersion 21
        targetSdkVersion 26
        versionCode 1
        versionName "1.0"
    }

    def validConfig
    def keystoreFile
    def keystorePassword
    def keystoreAlias

    try {
        Properties properties = new Properties()
        properties.load(project.rootProject.file('local.properties').newDataInputStream())
        keystoreFile = properties.getProperty('keystore.file')
        keystorePassword = properties.getProperty('keystore.password')
        keystoreAlias = properties.getProperty('keystore.alias')
        validConfig = keystoreFile != null && keystorePassword != null && keystoreAlias != null;
    } catch (error) {
        validConfig = false
    }

    if (validConfig) {
        System.out.println("Release signing configured with " + keystoreFile)
        signingConfigs {
            release {
                storeFile project.rootProject.file(keystoreFile)
                storePassword keystorePassword
                keyAlias keystoreAlias
                keyPassword keystorePassword
            }
        }
    } else {
        System.out.println("Specify keystore.file, keystore.alias and keystore.password in local.properties to enable release signing.")
    }

    buildTypes {
        release {
            if (validConfig) {
                signingConfig signingConfigs.release
            }
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
        }
        debug {
            debuggable true
        }
    }
    compileOptions {
        targetCompatibility 1.8
        sourceCompatibility 1.8
    }
    }


    dependencies {
        implementation fileTree(include: ['*.jar'], dir: 'libs')
        implementation 'com.google.zxing:core:3.3.0'
        implementation 'com.android.support.constraint:constraint-layout:1.0.2'
        implementation 'com.android.support:design:26'
        implementation 'com.android.support:preference-v14:26+'
        implementation 'com.android.support:support-v13:26+'
        implementation 'com.burgstaller:okhttp-digest:1.17'
        implementation 'net.sourceforge.jtds:jtds:1.3.1'
        implementation 'com.sunmi:sunmiui:latest.release'
        //    implementation 'javax.mail:1.4.7'
        implementation 'com.android.support:appcompat-v7:26+'
        implementation 'com.android.support:design:26+'
        implementation 'com.android.support:recyclerview-v7:26+'
        implementation 'com.sun.mail:android-mail:1.6.2'
        implementation 'com.sun.mail:android-activation:1.6.2'
        //    debugImplementation 'com.squareup.leakcanary:leakcanary-android:1.5'
        //    releaseImplementation 'com.squareup.leakcanary:leakcanary-android-no-op:1.5'
        implementation files('/lib/commons-email-1.5.jar')
        implementation project(':zxing-android-embedded')
        implementation files('/lib/additionnal.jar')
    }

I copied the java folder, the layouts, and the AndroidManifest into another app that was working fine, but Android Studio still installs the app more than once.

I guess it is something about the Android manifest or Gradle, but I have no idea what. Any tips?

How to resume Django test suite from the last error/failure?

I have a large test suite (1000+ tests), running for a total of 3-4 minutes. When a test fails and I fix it, I'd like to start from that test and continue running the other tests. (I know that a change may affect the previous tests, but running them every time seems wasteful)

Looking at python manage.py test --help, I see no option to run the test suite starting from the last failure. How can it be done?

Jest TypeError in pure JavaScript Node.js

I cannot make it work without an error, and am requesting help. The issue is the following: whenever I start Jest with --watchAll it causes this error:

TypeError: Cannot read property 'Object.<anonymous>' of null
        at Runtime._execModule (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/jest-runtime/build/index.js:447:72)
        at Runtime.requireModule (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/jest-runtime/build/index.js:265:14)
        at Runtime.requireModuleOrMock (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/jest-runtime/build/index.js:337:19)
        at Object.factory (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/mathjs/lib/function/arithmetic/divide.js:6:27)
        at load (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/mathjs/lib/core/core.js:106:28)
        at resolver (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/mathjs/lib/core/function/import.js:220:24)
        at Object.get [as divide] (/home/timmy/react/yogist/yogist_chatbot_server/node_modules/mathjs/lib/utils/object.js:210:20)
        at getScoreMatrix (/home/timmy/react/yogist/yogist_chatbot_server/controllers/recommendation_engine.js:38:29)
        at /home/timmy/react/yogist/yogist_chatbot_server/controllers/chatbot.js:110:61
        at <anonymous>

I found out that the error disappears when I either remove this line from my code:

matrix = mathjs.divide(matrix, mathjs.norm(matrix, "fro"));

OR

add a setTimeout with roughly a 200ms delay in the Jest test,

OR

run Jest without the --watchAll or --watch flag.

It also appears when I run npx jest (so without a watch flag).

It only appears when testing with Jest; the code itself works seamlessly (it also works despite this error, and the test succeeds).

This seems to be relevant: 5205, but I could not understand the solution properly. Can somebody explain it again, please?

Here is the Jest test code:

import request from "supertest";
import Server from "../index";
import path from "path";
import _ from "lodash";

const chai = require("chai");
const expect = chai.expect;

const API_BASE_PATH = "localhost:3000/";

describe("testing userActions", () => {
  beforeAll(done => {
    setTimeout(() => {
      done();
    }, 1);
  });
  afterAll(() => {
    Server.close();
  });
  describe("GET user/getSubscription", () => {
    it("should return 200 OK.", done => {
      return request(Server)
        .post(API_BASE_PATH + "user/getSubscription")
        .send({
          headers: {
            authorization: ""
          }
        })
        .end((err, res) => {
          console.log("this is the error:", res.data);
          if (!err && res.body) {
            console.log("values are:", res.body);
            done("");
          } else {
            done(err);
          }
        });
    });
  });
});

Here is the relevant part of my package.json:

...    "dependencies": {
        "babel-cli": "^6.26.0",
        "babel-core": "^6.26.0",
        "babel-jest": "^23.0.6",
        "babel-loader": "^7.1.4",
        "babel-preset-es2015": "^6.24.1",
        "bcrypt-nodejs": "^0.0.3",
        "body-parser": "^1.18.2",
        "config": "^1.30.0",
        "cors": "^2.8.4",
        "express": "^4.16.3",
        "fs": "^0.0.1-security",
        "hash.js": "^1.1.5",
        "jquery-csv": "^0.8.9",
        "jwt-simple": "^0.5.1",
        "kuker-emitters": "^6.7.4",
        "lodash": "^4.17.10",
        "mathjs": "^5.2.3",
        "moment": "^2.22.1",
        "mongoose": "^5.0.10",
        "mongoose-moment": "^0.1.3",
        "mongoose-type-url": "^1.0.2",
        "morgan": "^1.9.0",
        "node-schedule": "^1.3.0",
        "nodemailer": "^4.6.4",
        "nodemon": "^1.17.2",
        "npx": "^10.2.0",
        "passport": "^0.4.0",
        "passport-jwt": "^4.0.0",
        "passport-local": "^1.0.0",
        "path": "^0.12.7",
        "random-token": "^0.0.8",
        "require": "^2.4.20",
        "socket.io": "^2.0.4",
        "socket.io-redis": "^5.2.0",
        "stent": "^5.1.0",
        "stripe": "^6.1.0",
        "uuid": "^3.2.1",
        "winston": "^3.0.0"
      },
      "devDependencies": {
        "chai": "^4.1.2",
        "jest": "^23.6.0",
        "jest-cli": "^22.0.4",
        "supertest": "^3.0.0",
        "webpack": "^4.23.1",
        "webpack-cli": "^3.1.2",
        "webpack-node-externals": "^1.7.2"
      }, ...

Unique values and their counts for each column in a pandas DataFrame

I have built this test to explore issues in a dataset. Now I want to expand it in order to count the most and least frequent values (a maximum of 10 values, but it would be nice if this were adjustable):

import numpy as np
import pandas as pd

lvl1 = ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', np.nan]
lvl2 = ['foo', 'foo', 'bar', 'bar', 'bar', 'foo', 'foo', 'foo', 'bar', 'bar']
lvl3 = [1, 1, 1, 2, 2, 3, 3, 4, 5, 6]
df = pd.DataFrame({'L1': lvl1, 'L2': lvl2, 'L3': lvl3})

df.apply(lambda x: [100 * (1 - x.count() / len(x.index)), x.dtype, x.unique()],
         result_type='expand').T.rename(
    index=str, columns={0: "Nullity %", 1: "Type", 2: "Unique Values"})

This gives

 Nullity %   Type    Unique Values
L1  10        object  [A, B, nan]
L2  0         object  [foo, bar]
L3  0         int     [1,2,3,4,5,6]

How can I expand it to:

   Nullity %   Type    UniqueValue1   UniqueValue2   UniqueValue3  ...  UniqueValue-3  UniqueValue-2  UniqueValue-1
L1  10         object  A:5            B:4            nan:1
L2  0          object  foo:5          bar:5
L3  0          int     1:3            2:2            3:2           ...  4:1            5:1            6:1
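One possible way to get there (a sketch, not necessarily the most elegant) is value_counts per column: it already sorts by frequency, so the most and least frequent values sit at the two ends, and unequal column lengths are padded with NaN by from_dict:

```python
import numpy as np
import pandas as pd

def summarize(df, top=10):
    """Nullity %, dtype, and up to `top` 'value:count' pairs per column."""
    rows = {}
    for col in df.columns:
        s = df[col]
        counts = s.value_counts(dropna=False).head(top)  # most frequent first
        pairs = [f"{value}:{count}" for value, count in counts.items()]
        rows[col] = [round(100 * s.isna().mean(), 1), s.dtype] + pairs
    return pd.DataFrame.from_dict(rows, orient="index")

lvl1 = ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', np.nan]
lvl2 = ['foo', 'foo', 'bar', 'bar', 'bar', 'foo', 'foo', 'foo', 'bar', 'bar']
lvl3 = [1, 1, 1, 2, 2, 3, 3, 4, 5, 6]
df = pd.DataFrame({'L1': lvl1, 'L2': lvl2, 'L3': lvl3})

summary = summarize(df)
print(summary)
```

This keeps the most frequent values in the leading columns; to get the least frequent ones in fixed trailing columns as shown above, the pairs list would additionally need padding to a fixed width before building the frame.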

How to use @testable import for frameworks built via Carthage

I've already read answers for this question but I'm not satisfied.

Is there any possibility to use @testable for external libraries, e.g. Alamofire or RxRealm? (The reason I would like to do that is that some classes are not open, and in some cases it's not possible to create a mock in unit tests without overriding the real implementation.)

Should I test methods in Django models?

Is there any reason to test model methods, or should I assume that Django works properly and leave them untested?

Here is my model and a couple of methods from it:

class SlackTeam(models.Model):
    ...

    def get_user(self, user_id):
        return self.users.filter(user_id=user_id).first()

    def deactivate(self):
        self.active = False
        self.initiator.access_token = ''
        self.initiator.save()
        self.initiator = None
        self.deactivated_at = timezone.now()
        self.save()

        if hasattr(self, 'slackbot'):
            self.slackbot.delete()

    def set_initiator(self, user):
        self.initiator = user
        self.save(update_fields=['initiator'])

    @classmethod
    def initialize(cls, team_id, name):
        return cls.objects.update_or_create(
            team_id=team_id, defaults={'name': name})[0]

    @classmethod
    def get_by_team_id(cls, team_id):
        return cls.objects.filter(team_id=team_id).first()
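Methods like deactivate above encode application logic rather than Django internals, so they are usually worth a test. The shape of such a test can be sketched in plain Python with unittest.mock; the SlackTeam below is a simplified, hypothetical stand-in for the real model (no ORM, and the slackbot handling is omitted):

```python
from unittest import mock

class SlackTeam:
    """Simplified stand-in for the Django model above (hypothetical, no ORM)."""
    def __init__(self, initiator):
        self.active = True
        self.initiator = initiator
        self.deactivated_at = None

    def save(self):
        pass  # the real model would write to the database here

    def deactivate(self):
        self.active = False
        self.initiator.access_token = ''
        self.initiator.save()
        self.initiator = None
        self.deactivated_at = 'now'  # stands in for timezone.now()
        self.save()

# The test: deactivate() must clear the initiator's token, save it,
# and detach it from the team.
initiator = mock.Mock()
team = SlackTeam(initiator)
team.deactivate()

assert team.active is False
assert team.initiator is None
assert initiator.access_token == ''
initiator.save.assert_called_once()
```

In a real Django project the same assertions would live in a TestCase using model instances, but the point stands: the behaviour being checked is the app's, not Django's.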

Test Driven Development in Linear Algebra

I want to learn TDD for working with linear algebra (basically with matrices) in scientific programming (quantum chemistry). However, since the intermediates are not known, and sometimes the only things known are the end results, I don't know how to do it.
Can you show me how you would do TDD on the example below?

for i = start1:end1, j = start1:end1, k = start2:end2, l = start2:end2
    A[i,j] += B[i,k] * C[k,l] * D[l,j]
end

with A being the new Matrix calculated in the function.
with B C D being predefined Matrices.
with start1, start2, end1, end2 being predefined integers.

In a usual program I have around 20 of these blocks, so I think I don't know anything about the new A.
Most bugs I have are simple typing errors, like writing

B[k,i]

instead of the above.
The only debugging scheme I have, if the end result is wrong, is to write everything twice and then check both A's, hoping I have not made the same typing error twice.

Thanks for your help!
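One workable TDD angle for blocks like this: treat a slow, obviously-correct reference implementation as the oracle, and test the hand-written loops against it on small matrices, plus identity cases whose result is known by inspection. A sketch in Python (the function names are made up):

```python
def mat_mul(X, Y):
    """Slow but obviously correct matrix product, used as the test oracle."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def triple_product(B, C, D):
    """The routine under test: A[i,j] = sum over k,l of B[i,k] * C[k,l] * D[l,j]."""
    n, p = len(B), len(D[0])
    A = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(len(C)):
                for l in range(len(D)):
                    A[i][j] += B[i][k] * C[k][l] * D[l][j]
    return A

# Tests: identity matrices leave the middle factor unchanged, and the loop
# version must agree with the oracle on an arbitrary small example.
I2 = [[1, 0], [0, 1]]
B, C, D = [[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 1], [2, 3]]
assert triple_product(I2, C, I2) == C
assert triple_product(B, C, D) == mat_mul(mat_mul(B, C), D)
```

A typo of the kind described above, e.g. B[k,i] instead of B[i,k], makes the oracle comparison fail immediately as long as the test matrices are non-symmetric, so the reference implementation only has to be written (and eyeballed) once.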

mardi 30 octobre 2018

Is there any hosted Test Management tool for free use without memory limitation

Is there any hosted test management tool for free use without a memory limitation? I found "Tematoo", and it is limited to 0.5 GB; I want more memory than that.

How do we mock out dependencies with Jest _per test_?

(Here's the full minimal repro: https://github.com/magicmark/jest_question)

Given the following app:

src/food.js

const Food = {
  carbs: "rice",
  veg: "green beans",
  type: "dinner"
};

export default Food;

src/meal.js

import Food from "./food";

function formatMeal() {
  const { carbs, veg, type } = Food;

  if (type === "dinner") {
    return `Good evening. Dinner is ${veg} and ${carbs}. Yum!`;
  } else if (type === "breakfast") {
    return `Good morning. Breakfast is ${veg} and ${carbs}. Yum!`;
  } else {
    return "No soup for you!";
  }
}

export default function getMeal() {
  const meal = formatMeal();

  return meal;
}

I have the following test:

__tests__/meal_test.js

import getMeal from "../src/meal";

describe("meal tests", () => {
  beforeEach(() => {
    jest.resetModules();
  });

  it("should print dinner", () => {
    expect(getMeal()).toBe(
      "Good evening. Dinner is green beans and rice. Yum!"
    );
  });

  it("should print breakfast (mocked)", () => {
    jest.doMock("../src/food", () => ({
      type: "breakfast",
      veg: "avocado",
      carbs: "toast"
    }));

    // prints out the newly mocked food!
    console.log(require("../src/food"));

    // ...but we didn't mock it in time, so this fails!
    expect(getMeal()).toBe("Good morning. Dinner is avocado and toast. Yum!");
  });
});

How do I correctly mock out Food per test? I don't want to override Food for every test case, just for a single test.

I would also like, ideally, not to change the application source code (although maybe having Food be a function that returns an object instead would be acceptable - still can't get that to work either, though).

Things I've tried already:

  • thread the Food object around through getMeal + use dependency injection into formatMeal
    • (the whole point of this approach IRL is that we don't want to thread Food around the whole app)
  • manual mock + jest.mock() - it's possible the answer is here somewhere, but it's tough to control the value here and reset it per test due to import time weirdness

Thanks!

Prop function in componentDidUpdate not being called in mounted enzyme test

I have a class-based component that has the following method:

componentDidUpdate(prevProps) {
  if (prevProps.location.pathname !== this.props.location.pathname) {
    this.props.onDelete();
  }
}

I have the following test that is failing:

  it(`should call the 'onDelete' function when 'location.pathName' prop changes`, () => {
    const wrapper = mount(<AlertsList.WrappedComponent {...props} />);

    // test that the deleteFunction wasn't called yet
    expect(deleteFunction).not.toHaveBeenCalled();

    // now update the prop
    wrapper.setProps({ location: { ...props.location, pathName: "/otherpath" } });

    // now check that the deleteFunction was called
    expect(deleteFunction).toHaveBeenCalled();
  });

where props is initialized in a beforeEach statement like so:

  beforeEach(() => {
    props = {
      ...
      location: { pathName: "/" }
    };
  });

But my test fails on the second assertion, after setProps is called, where I would expect the lifecycle method to have run. What am I doing wrong here?

Finding an Element by ID Selenium Java TestNG

I'm using Selenium and TestNG for the first time, and I've been trying to find an element by its ID, but I keep getting a "Cannot instantiate class" error. This is my code:

import org.testng.annotations.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.*;

public class NewTesting {

    WebDriver driver = new FirefoxDriver();

    @BeforeTest
    public void setUp() {
        driver.get("http://book.theautomatedtester.co.uk/chapter1");
    }

    @AfterTest
    public void tearDown() {
        driver.quit();
    }

    @Test
    public void testExample() {
        WebElement element = driver.findElement(By.id("verifybutton"));
    }

}

Maybe I missed installing something? I installed the TestNG plug-in for Eclipse and added the WebDriver JAR files; do I need to do more? I tried following multiple tutorials, but I keep getting errors. I hope someone can help. Thanks in advance!

How to test picking a picture from gallery in Android?

In a Fragment I have an onActivityResult() function which takes, among other arguments, the selected picture from the gallery.

It is called through the startActivityForResult() function, which opens the gallery and waits for a result. In my test class, how can I pass some picture as an argument to onActivityResult()?

Why won't this Linux command in Python open an xterm terminal?

I have the following line of code

print linuxCommand.execute_ssh_command("xterm -e \"cd /home/;./lapras.sh; bash\" &", True, False)

It won't open the graphical terminal. I can do it manually with a script or by running another script; also, the test is running in a Debian session on a server.

Golang Postgres Travis panic: pq: syntax error at or near "NOT"

I am getting an error I have not seen before when I try to run my Go tests through .travis.yml.

When running locally all my tests pass; however, when they run through Travis, I get the following error a few times in my tests:

panic: pq: syntax error at or near "NOT"

I thought maybe it was because my pg database utilizes the following packages:

"github.com/jmoiron/sqlx"
"github.com/lib/pq"

But my Travis config should pick up on those imports, right? My .travis.yml looks like the following:

language: go

go:
  - "1.10.4"
  - "1.10.3"
  - "1.10.1"

services:
  - postgresql

addons:
  postgresql: "10"
  apt:
    packages:
    - postgresql-10
    - postgresql-client-10
env:
  global:
  - PGPORT=5432

before_install:
  - sudo apt-get update
  - go get golang.org/x/tools/cmd/goimports
  - sudo /etc/init.d/postgresql stop
  - sudo /etc/init.d/postgresql start

before_script:
  - psql -c 'create database db1;' -U postgres
  - psql -c 'create database db2;' -U postgres

script:
  - go test ./... -parallel 2 -v

What is the point to use an embedded DB for testing when the team does not control the DB schema?

Given that the team inherited a legacy project, it does not control the DB schema. Meaning, anyone in our organisation can push changes to the big corporate DB shared with all other teams. Of course, changes are minimised in order to maintain backwards compatibility. However, I do not see the point of using partial (few-table) embedded or in-memory databases to test the persistence layer (DAOs).

What risk can actually be covered when our SQL scripts are not in sync with the PROD or SIT databases? At least not until something breaks and we realise that our SQL scripts are not up to date.

I need another pair of eyes (+brain) to understand this kind of testing strategy.

What kind of testing would you apply to the persistence layer in this tricky situation?

Many thanks in advance,

Using fixtures with LiipFunctionalTestBundle and Postgresql

I'm trying to implement functional tests with Liip's FunctionalTestBundle in a legacy Symfony 2.8 project.

This project uses hand-written PostgreSQL queries in repositories and such, so I cannot force the usage of SQLite. Based on the bundle's documentation, I have to create the schema myself during test setup. I tried using the code snippet provided in the documentation:

public function setUp()
{
    $em = $this->getContainer()->get('doctrine')->getManager();
    if (!isset($metadatas)) {
        $metadatas = $em->getMetadataFactory()->getAllMetadata();
    }

    $schemaTool = new SchemaTool($em);
    $schemaTool->dropDatabase();

    if (!empty($metadatas)) {
        $schemaTool->createSchema($metadatas);
    }

    $this->postFixtureSetup();
}

This fails on $schemaTool->createSchema($metadatas); with the following error:

SQLSTATE[42P06]: Duplicate schema: 7 ERROR:  schema "app" already exists' while executing DDL: CREATE SCHEMA app

/path/to/project/vendor/doctrine/orm/lib/Doctrine/ORM/Tools/ToolsException.php:39
/path/to/project/vendor/doctrine/orm/lib/Doctrine/ORM/Tools/SchemaTool.php:96

My configuration is as follows:

# /path/to/project/app/config/config_test.yml
doctrine:
    dbal:
        default_connection: airlinesManager
        connections:
            airlinesManager:
                driver: pdo_pgsql
                memory: true
                path:   "%kernel.cache_dir%/test.db"

Dependencies:

# /path/to/project/composer.json
{
    [...]
    "require": {
        "php": "^7.1",
        "symfony/symfony": "2.8.*",
        "doctrine/orm": "^2.4.8",
        "doctrine/doctrine-bundle": "~1.4",
        "doctrine/common": "2.*",
        "doctrine/dbal": "2.*",
        "doctrine/doctrine-fixtures-bundle": "2.*",
        [...]
    },
    "require-dev": {
        [...],
        "liip/functional-test-bundle": "^1.10"
    },
    [...]
}

If I change my config to driver: pdo_sqlite and remove the setUp snippet, everything works fine as far as fixtures are concerned. But I start getting errors as soon as tests hit the hand-written postgresql queries.

Any ideas to make the schema thing work?

NB: For some reason I cannot do any database operation from the Symfony console. Here is an example of trying to drop the database.

php app/console doctrine:database:drop --force --env="test"

I always get the same error.

Could not drop database for connection named /path/to/project/app/cache/test/test.db
An exception occurred while executing 'DROP DATABASE /path/to/project/app/cache/test/test.db':

SQLSTATE[42601]: Syntax error: 7 ERROR:  syntax error at or near "/"
LINE 1: DROP DATABASE /path/to/project/...

Then again, this works fine when using sqlite.

Testing DAO and Service layer with in-memory db and mocking

While learning testing in Java, I ran into something I didn't really get. I have unit tests for the DAO layer, where I test CRUD operations with the help of the in-memory database HSQLDB.
The next step is to test the service layer. On the web, the advice is to mock the DAO and then test the service.

I see only one case for mocks: when the service has methods that don't exist in the DAO. Then we don't need to retest the CRUD operations (the DAO itself), so we mock the DAO and test only the service.

  1. But what if I have the same methods in the service as in the DAO?
  2. And what if I just use an in-memory database to test the service, as I did with the DAO?

I'd like to know the benefits of using mocks instead of an in-memory database. Moreover, I find tests less readable when they use mocks.

If it makes any sense - I use Spring.

Facing java.lang.ClassNotFoundException: com.intuit.karate.demo.util.SchemaUtils when trying to validate schema

I am trying to validate a JSON schema using Karate, and I am facing this:

java.lang.RuntimeException: javascript evaluation failed: Java.type('com.intuit.karate.demo.util.SchemaUtils'), java.lang.ClassNotFoundException: com.intuit.karate.demo.util.SchemaUtils

My pom dependencies are :

    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-apache</artifactId>
        <version>${karate.version}</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.intuit.karate</groupId>
        <artifactId>karate-junit4</artifactId>
        <version>${karate.version}</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>net.masterthought</groupId>
        <artifactId>cucumber-reporting</artifactId>
        <version>3.8.0</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>com.github.java-json-tools</groupId>
        <artifactId>json-schema-validator</artifactId>
        <version>2.2.8</version>
    </dependency>

</dependencies>

Feature file syntax and json files locations

Can anyone please suggest how to fix this?

EADDRINUSE :::8080 in jest

I know this error means that port 8080 is already in use, but I also tried a different port and I am still getting the same error, and only for some specific tests.

This is the code of one of those tests:

  describe("GET /", () => {
    it("should return all genres", async() => {
      await Genre.collection.insertMany([
        {name: 'genre1'},
        {name: 'genre2'}
      ]);

      const res = await request(server).get('/api/genres');
      expect(res.status).toBe(200);
      expect(res.body.length).toBe(2);
      expect(res.body.some( g => g.name === 'genre1')).toBeTruthy();
      expect(res.body.some( g => g.name === 'genre2')).toBeTruthy();
    });
  });

Any way to solve this error?

How would you unit test this method?

I'm looking to unit test this method but not sure how to go about doing that with a Stream reader and array created in the method. Any ideas?

public static List<Associations> CollectingAssociations(StreamReader sr, string MainUID, string AssocTrainUID, string AssocLoc)
{
    string line;
    List<Associations> associationsArray = new List<Associations>();
    // skips the header
    sr.ReadLine();
    // reads the top of the file, the association records
    while ((line = sr.ReadLine()).Substring(0, 2) == "AA")
    {
        if (line.Substring(3, 6).Contains(MainUID) &&
            line.Substring(9, 6).Contains(AssocTrainUID) &&
            line.Substring(37, 7).Contains(AssocLoc))
        {
            associationsArray.Add(new Associations(line));
        }
    }
    return associationsArray;
}

Can you launch a closed beta before publishing the app?

I want to test my app before fully releasing it but with both the internal testing and the beta testing, Google tells me that I have to release it first. How can I test it a bit before publishing it? Is it possible? Thanks in advance

How to add a token authentication header

I tried to create a client that uses the Django test client and makes a GET request with token authentication, but I get a 401 error, so something is wrong. The same happens for POST. I've looked at different solutions but nothing works. This is part of the code:

from django.test import Client

class MyClient:
    def __init__(self):
        self.client = Client()
        self.token = None

    def get(self, url):
        if self.token is not None:
            new_url = base + url
            result = self.client.get(new_url, headers={'Content-Type': 'application/json','Authorization': 'Token {}'.format(self.token)})
        else:
            result = self.client.get(url)
        return result

    def login(self, username, password):
        payload = {
           "username": username,
           "password": password
        }
        response = self.post_without_auth('/api/auth/token_auth/', payload)
        if response.status_code == 200:  # '==', not 'is': identity checks on ints are unreliable
            self.token = response.json()['token']
        return response
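A likely culprit, for what it's worth: as far as I know, the Django test client only gained a headers= keyword in Django 4.2. On older versions, extra keyword arguments are injected straight into the WSGI environ, so HTTP headers must be passed CGI-style with an HTTP_ prefix (the Authorization header becomes HTTP_AUTHORIZATION). A minimal sketch of building those kwargs; auth_kwargs is a hypothetical helper, not part of Django:

```python
def auth_kwargs(token):
    """Build extra kwargs for django.test.Client requests.

    The test client merges extra keyword arguments into the WSGI
    environ, so HTTP headers need the CGI-style HTTP_ prefix.
    """
    if token is None:
        return {}
    return {"HTTP_AUTHORIZATION": "Token {}".format(token)}

# In the client above, the call would then look like:
#     result = self.client.get(new_url, **auth_kwargs(self.token))
print(auth_kwargs("abc123"))
```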

Generating test cases in Robot framework data-driven approach

I have a question on how to avoid hard-coding test data into Robot framework test cases when using test templates.

I have test cases such as:

*** Settings ***
Test Template     Invalid Login

*** Test Cases ***    LOGIN             PASSWORD
Login admin           admin             ${INVALID_PWD}
Login student         student           ${INVALID_PWD}
Login learner         learner           ${INVALID_PWD}
Login staff           staff             ${INVALID_PWD}

and so on...

I like this approach as long as I don't have 100 or so logins and passwords. Otherwise I'd need to hard-code them all here, which seems like too much work to me.

Another what I've tried is:

*** Test Cases ***
Mahara Invalid Login
[Template]    Invalid ${login} with ${password}
admin      aa
student    aa

which makes it a bit simpler, but I don't like it either, because it is just one test case with several different steps, each step using different test data.

What I'd like to have is, say, a list of logins and passwords, or a dict in Python and make Robot framework use these to generate such test cases. However, I have no idea if it's possible.

I've searched a bit and, among other things, found this post: https://stackoverflow.com/a/25206407/10401931 that doesn't look promising.

Then, I've found several ways to read a .csv file. I can do that in Python, but it doesn't answer my question: how do I feed what I read from the .csv into this data-driven approach? Basically, I think it comes down to making the test template loop over a given list/dict. Since a test template is essentially a for loop, there might be a way to change this loop a bit. Or isn't there?

Another approach could be to generate the whole .robot test suite as a file in Python. Again, I know how to make this, but it seems like overengineering it a lot, I'd like to find an easier way to do so.

I'd appreciate a little nudge in the right direction.

Thank you
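For the record, the "generate the whole .robot suite from Python" route mentioned above can be quite small. A sketch under assumptions: the credential list is made up, and in practice the pairs could come from a CSV file instead:

```python
# Hypothetical credentials; in practice these could be read from a CSV file.
CREDENTIALS = [
    ("admin", "aa"),
    ("student", "aa"),
    ("learner", "aa"),
]

SUITE_HEADER = "*** Settings ***\nTest Template     Invalid Login\n\n*** Test Cases ***"

def generate_suite(credentials):
    """Render one templated Robot Framework test case per credential pair."""
    lines = [SUITE_HEADER]
    for login, password in credentials:
        # Robot Framework separates cells with two or more spaces.
        lines.append("Login {0}    {0}    {1}".format(login, password))
    return "\n".join(lines) + "\n"

print(generate_suite(CREDENTIALS))
```

Writing the returned string to a .robot file before invoking robot keeps the data out of the suite source, at the cost of an extra generation step.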

How to create a Xiaomi emulator for Android Studio? Or any other way to test an app on Xiaomi emulators

I want to test my Android app on a Xiaomi device; how do I create an emulator for it? I tried Andy, Genymotion and BlueStacks, but there is no option for Xiaomi devices.

lundi 29 octobre 2018

Possible to mock module within module using Jest?

I want to mock a dependency (one of the files) that is required from within a module in my node_modules folder using jest.

Is this possible? I can't find anything in the API that would let me do it without stubbing out the entire thing.

MS access and excel job interview test

I have a job interview test coming up for Microsoft Access and Excel. I already have basic-to-intermediate skills. Any recommendations on what to focus on at an advanced level for both Excel and Access?

Testing React Component which has material UI using Enzyme and Jest

My React component has the following imports from Material UI:

     import Dialog from '@material-ui/core/Dialog';
     import DialogContent from '@material-ui/core/DialogContent';
     import { withStyles } from '@material-ui/core/styles';


When I run my test cases, written with Enzyme and Jest, I see the error below. Does this happen because of the Material UI dialog, and how should I mock it if it needs to be mocked?

      Test suite failed to run

      TypeError: (0 , _ponyfill2.default) is not a function

     1 | import './login.css';
     2 | import React, { Component } from 'react';
     > 3 | import Dialog from '@material-ui/core/Dialog';
         | ^
     4 | import DialogContent from '@material-ui/core/DialogContent';
     5 | import { withStyles } from '@material-ui/core/styles';
     6 | import { Redirect } from "react-router-dom";

      at Object.<anonymous> (node_modules/jss/node_modules/symbol-observable/lib/index.js:28:40)
     at Object.<anonymous> (node_modules/jss/lib/utils/isObservable.js:7:25)
     at Object.<anonymous> (node_modules/jss/lib/utils/cloneStyle.js:11:21)
     at Object.<anonymous> (node_modules/jss/lib/utils/createRule.js:16:19)
     at Object.<anonymous> (node_modules/jss/lib/RuleList.js:11:19)  

My package.json:

    "babel-cli": "6.26.0",
    "babel-core": "6.26.0",
    "babel-eslint": "8.2.6",
    "babel-jest": "^23.4.2",
    "babel-polyfill": "^6.26.0",
    "enzyme": "^3.4.4",
    "enzyme-adapter-react-16": "^1.2.0",
    "jest": "^23.5.0"

Can anyone let me know why this error is happening?

Testing with manual calculation or programmed calculation?

I'm using Django and I need to write a test that involves a calculation. Is it best practice to calculate the expected value manually, or is it OK to compute it with the sum function (see below)?

This example is easier for me because I don't have to calculate something manually:

def test_balance(self):
    amounts = [120.82, 90.23, 89.32, 193.92]
    for amount in amounts:
        self.mockedTransaction(amount=amount)
    total = Balance.total()

    self.assertEqual(total, sum(amounts))

Or in this example I have to calculate the expected value manually:

def test_balance(self):
    self.mockedTransaction(amount=120.82)
    self.mockedTransaction(amount=90.23)
    self.mockedTransaction(amount=89.32)
    self.mockedTransaction(amount=193.92)
    total = Balance.total()

    self.assertEqual(total, 494.29)
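One caveat that applies to both styles, whichever expected value you pick: the amounts are binary floats, so the computed total can differ from the hand-calculated 494.29 by a tiny rounding error, and an exact assertEqual may fail. A small sketch of a tolerance-based comparison, using the amounts from the question; unittest's assertAlmostEqual plays the same role as math.isclose here:

```python
import math

amounts = [120.82, 90.23, 89.32, 193.92]
total = sum(amounts)

# Exact float equality is fragile; compare with an absolute tolerance instead
# (self.assertAlmostEqual(total, 494.29, places=2) is the unittest equivalent).
assert math.isclose(total, 494.29, abs_tol=1e-9)
print(round(total, 2))
```

If Balance.total() deals with money, storing amounts as decimal.Decimal avoids the issue entirely and lets the exact assertion stand.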

Ubuntu 18 Autocomplete Bug? "bash: cd: too many arguments"

Autocomplete started behaving badly after upgrading from Ubuntu 16. If I hit Tab after

git checkout src/

I get something like this:

$ git checkout src/bash: cd: too many arguments

main/ test/ 

Coincidentally, I happened to see the same thing using the test command from GNU coreutils:

$ ls
pom.xml  src  target
$ test pom.xml
bash: cd: too many arguments

Are the two things connected, maybe? Unfortunately I couldn't find any bug report after googling it.

Unit test with resources loaded with ClassLoader and Maven Surefire

I have a JavaFX project that loads the FXML files using getClassLoader().getResource(). The main code runs fine but when I run tests with Maven Surefire Plugin I have this error:

java.lang.IllegalStateException: Location is not set.
javafx.fxml/javafx.fxml.FXMLLoader.loadImpl(FXMLLoader.java:2459)
javafx.fxml/javafx.fxml.FXMLLoader.load(FXMLLoader.java:2435)

Upon further investigation I discovered that when getClassLoader().getResource() is called while executing the test, the path it tries to resolve is in "target/test-classes" folder, whereas the resources reside in "target/classes" folder. How do I solve this problem? My project follows Maven's default structure if that is relevant.

Keep getting an Error when setting up onPrepare in Protractor in configuration file

I am creating my first Protractor framework and I am setting up onPrepare in my configuration file.

I keep getting an error (an X symbol in my editor) and I can't figure out why.

exports.config = {
seleniumAddress : 'http://localhost:4444/wd/hub',


    specs: ['PageObjectLocator1.js'],

    capabilities: {
        browserName: 'chrome'
      } 

onPrepare =function{

    //place global functions here

}


} 

Here's a screenshot too.


Test method in CBV in Django

I have the following CBV

class Index(TemplateView):
    template_name="index.html"

    def custom_method(self):
        # do some stuff here
        print("Test")

How can I test this method in a TestCase? I've tried the following but it's not working:

def test_custom_method(self):
    response = self.client.get(reverse("index"))
    view_instance = Index.as_view()
    view_instance(response.wsgi_request)
    view_instance.custom_method()
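One thing worth noting: as_view() returns a plain view function, not a view instance, which is why custom_method isn't reachable on it. In Django 2.2+ the usual trick is to instantiate the class directly and call setup() with a request before invoking the method (in a real TestCase the request would come from RequestFactory). Since the pattern is plain Python, here is a runnable sketch with a stand-in base class instead of the real Django one:

```python
class TemplateView:
    """Stand-in for django.views.generic.TemplateView; Django >= 2.2
    provides setup() to attach the request to the view instance."""
    def setup(self, request):
        self.request = request

class Index(TemplateView):
    template_name = "index.html"

    def custom_method(self):
        # stand-in for "do some stuff here"
        return "Test"

# In a Django TestCase this would be: view.setup(RequestFactory().get("/"))
view = Index()
view.setup(request=None)
print(view.custom_method())
```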

How to test (in depth) if the JDK is correctly installed?

I suspect something is wrong with my system's version of Java. Certain apps are seg faulting or running out of memory or having linking errors. If I had installed the JDK from source, I would just do something like "make test" and see which tests fail. However, it looks like building the JDK from source is not easy. Is there a standard test to find issues with a Java installation? I am looking for something more in-depth than just building a HelloWorld app, which I have done.

I am using OpenJDK 8 on Ubuntu 18.04, installed via apt install openjdk-8.

Android: Why Espresso won't run on compileSdkVersion 27?

I started testing my app and had some errors, but I managed to fix them and run the test successfully.

The question is: why won't this Espresso test run on compileSdkVersion 27? I tried the "old" dependencies, but it fails every time.

Here is build.gradle (Module:app) dependencies:

dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support.constraint:constraint-layout:1.1.3'
implementation 'com.android.support:design:28.0.0'
//testImplementation 'junit:junit:4.12'
//androidTestImplementation 'com.android.support.test:runner:1.0.2'
//androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
//androidTestImplementation 'com.android.support.test:rules:1.0.2'

// Room components
implementation "android.arch.persistence.room:runtime:$rootProject.roomVersion"
annotationProcessor "android.arch.persistence.room:compiler:$rootProject.roomVersion"
androidTestImplementation "android.arch.persistence.room:testing:$rootProject.roomVersion"

// Lifecycle components
implementation "android.arch.lifecycle:extensions:$rootProject.archLifecycleVersion"
annotationProcessor "android.arch.lifecycle:compiler:$rootProject.archLifecycleVersion"

// ANDROID X

// Core library
androidTestImplementation 'androidx.test:core:1.0.0'

// AndroidJUnitRunner and JUnit Rules
androidTestImplementation 'androidx.test:runner:1.1.0'
androidTestImplementation 'androidx.test:rules:1.1.0'

// Assertions
androidTestImplementation 'androidx.test.ext:junit:1.0.0'
androidTestImplementation 'androidx.test.ext:truth:1.0.0'
androidTestImplementation 'com.google.truth:truth:0.42'

// Espresso dependencies
androidTestImplementation 'androidx.test.espresso:espresso-core:3.1.0'
androidTestImplementation 'androidx.test.espresso:espresso-contrib:3.1.0'
androidTestImplementation 'androidx.test.espresso:espresso-intents:3.1.0'
androidTestImplementation 'androidx.test.espresso:espresso-accessibility:3.1.0'
androidTestImplementation 'androidx.test.espresso:espresso-web:3.1.0'
androidTestImplementation 'androidx.test.espresso.idling:idling-concurrent:3.1.0'

androidTestImplementation 'androidx.test.espresso:espresso-idling-resource:3.1.0'

androidTestImplementation 'org.hamcrest:hamcrest-library:1.3'
}

The commented-out dependencies are the ones I tried using at first, but it fails regardless of whether the "old" or "new" dependencies are used.

Here is the test class:

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.rule.ActivityTestRule;

import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

@RunWith(AndroidJUnit4.class)
public class ExampleInstrumentedTest {

@Rule
public ActivityTestRule<MainActivity> myRule = new ActivityTestRule<>(MainActivity.class);

@Test
public void activityLaunch(){
    onView(withId(R.id.fab)).perform(click());
    onView(withId(R.id.edit_word)).check(matches(isDisplayed()));
    onView(withId(R.id.edit_word)).perform(typeText("Hello"));
    onView(withId(R.id.button_save)).perform(click());
    onView(withId(R.id.fab)).check(matches(isDisplayed()));

}
}

Sinon stub method calling from another file

I'm trying to set up unit tests in my project. For that, I use the mocha, chai and sinon libraries. I'm using Node.js server version 8.

I'd like to stub a method which is declared in another file.

Here's an example :

- a.js
   exports.FnA = () => FnB()
- b.js
   exports.FnB = () => { ... }

I want to stub the method FnB() from the b.js file so that I can test FnA() regardless of what FnB() returns.

Here's what I've tried :

beforeEach(() => {
            this.FnBStub = sinon.stub(b, 'FnB').returns('mocked');
        });

afterEach(() => this.FnBStub.restore());

it('should mocked FnB function', () => {
            try {
                console.log(b.FnB()); //returns 'mocked'
                console.log(a.FnA()); //return error from FnB() execution ...
            } catch (e) {
                console.log(e);
            }
        });

It does stub the method FnB(), but only when I call it through the b module directly. When I call FnA(), the stub seems to have no effect...

What am I missing ?

Some help would be really appreciated, thanks :)

What is this part of the iPhone screen called? I want to use it to decide whether the screen is the home screen

I am new to iOS automated testing. What is the red-circled part of the iPhone screen called? I want to use it to decide whether the current screen is the home screen. To anyone with experience in this: is this a good way to detect the home screen? Thanks a lot!

How to use post method in Rest Assured Frame work

How do I use the POST method in the Rest Assured framework? I am learning API testing using Rest Assured; can anyone help me?

MY code Is:

    RestAssured.baseURI ="https://reqres.in";
    RequestSpecification request = RestAssured.given();

    JSONObject requestParams = new JSONObject();        
    requestParams.put("email",  "sydney@fife");
    requestParams.put("password", "pistol444");
    request.body(requestParams.toJSONString());
    Response response = request.post("/api/register");

    int statusCode = response.getStatusCode();
    System.out.println("API response code is :" + statusCode);
    String body =response.asString();
    System.out.println("Response body is : "+ body);

I get this error message after calling the POST method:

API response code is :400 Response body is : {"error":"Missing email or username"}

This is the reference URL: https://reqres.in/ and the POST method is: REGISTER - SUCCESSFUL.

Please help me .

Thanks

Azure Load Testing Web Test plug-ins, data sources and extraction rules

A customer I'm working with wants to load test their solution from different parts of the world, and Azure Load Testing would be a perfect fit for their scenarios. The problem is that they are using Web Test plug-ins, data sources, and extraction rules, so they cannot use their current web tests, since that functionality is not supported, as described in the following article from the documentation: https://docs.microsoft.com/en-us/azure/devops/test/load-test/reference-qa?view=vsts

Could you tell me the expected date for this functionality to become generally available? Thank you!

How to make my custom webserver more secure?

I wrote a small intranet webserver and I want it to be as secure as possible.
I know it won't be 100% secure, but I want to find and fix the most obvious security flaws. An automated tool would also be nice. Any recommendations?

How to execute an external python script without side effects?

I'm trying to build a simple unit testing framework (called 'cutest') in Python3, which provides some commands for testing and then it runs external test scripts.

The first problem is to run the test scripts independently, without accumulating side effects from one test to another as the tests are executed in a loop.

The second problem is to execute functions (e.g., on_setup()) defined in the external test scripts themselves.

Here is skeleton code for the testing framework ("cutest.py"):

def test(title):
    print("test:", title)
    try:
        on_setup() # function from the current test
    except:
        print("on_setup() not found")

def expect(str):
    print("expect:", str)

def main():
    import glob
    tests = glob.glob("test*.py")
    for script in tests:
        print("*******", script)
        try:
            exec(open(script).read(), globals())
        except:
            print("errors in script:", script)

if __name__ == "__main__":
    main()

And here is "test_1.py" script:

def on_setup():
    print("test_1.py:on_setup")

test("My first test")
expect("foo")

And finally, here is "test_2.py" script:

test("My second test")
expect("bar")

Note that the "test_2.py" does NOT define its on_setup() function.

Now, when I run python cutest.py in the directory with test_1.py and test_2.py, I get the following output:

******* test_1.py
test: My first test
test_1.py:on_setup
expect: foo
******* test_2.py
test: My second test
test_1.py:on_setup
expect: bar

The problem is that the output from test_2.py shows "test_1.py:on_setup" (a side effect from running test_1.py), whereas it should show "on_setup() not found", because on_setup() is not defined in test_2.py.

In my skeleton code "cutest.py" I used Python's exec(..., globals()) call, but perhaps the problems can be solved using __import__() or some other mechanism.
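One way to get rid of the leakage, for what it's worth: seed a fresh globals dict for every script instead of reusing the framework's own globals(), and have the framework look up on_setup() in that per-script namespace. A minimal runnable sketch of the idea; the Runner class and the inline script sources are illustrative, not the original cutest API:

```python
class Runner:
    """Executes each test script in its own namespace, so definitions
    such as on_setup() cannot leak from one script into the next."""

    def __init__(self):
        self.env = {}

    def test(self, title):
        print("test:", title)
        setup = self.env.get("on_setup")  # only this script's definition
        if setup is not None:
            setup()
        else:
            print("on_setup() not found")

    def expect(self, s):
        print("expect:", s)

    def run(self, source):
        # Fresh namespace per script: only the framework API is carried over.
        self.env = {"test": self.test, "expect": self.expect}
        exec(source, self.env)

runner = Runner()
runner.run("def on_setup():\n    print('test_1.py:on_setup')\ntest('My first test')\nexpect('foo')\n")
runner.run("test('My second test')\nexpect('bar')\n")
```

With real files, run() would read each script with open(path).read() as in the original main() loop; the key difference is only which dict is handed to exec().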

What is it like to work as a software engineer in a company?

Hi, I'm about to ask a question about jobs; please answer if you have experience working in a company. I'm a college student majoring in IT, and I did an internship as a programmer at one of the biggest companies in my country. But most people there were unfriendly, and they didn't trust the interns, so they just gave us random tasks unrelated to the actual work. The company didn't let any of us interns learn what they do, or simply show us what working as a programmer is like. After six months of the internship I gradually lost interest because of this. So there are two questions I want to ask.

  1. I see many programmers are hired at that company, but I can't understand what they are doing. What I see is that the application is very simple (I know because it's published on the Play Store). So I guess they are now doing maintenance. But what do they do for maintenance, and why are so many programmers hired for one simple application? My impression is that after the app is done, they just publish it and are finished; even the updates are very simple, like adding a button to one page. So, to those of you with experience working in IT: I'm still confused about what they actually do. Please describe the daily work of a software engineer in a company.

  2. During my internship, even though I didn't get the chance to learn alongside them, I saw that many programmers there were simply unfriendly (not all of them, but most) and didn't want to teach new people like me. The structure of learning and working there was, I'd say, bad for such a big bank. The good news is that I've heard that in other countries, such as the US, new people are treated as people who still need to learn, not as a waste of the company's money. So my question is: is this true? What is it like to work at your company?

Oh, and feel free to edit the tags to match the topic. Thanks.

Cloud Android app test with hardware at local premises

I have an Android mobile app that requires BLE communication with my proprietary BLE device.

The problem is that I would like to test my app on different real Android devices in the cloud, but that requires the BLE hardware, which is on my local premises. Any suggestions on how to make this happen? Has anyone faced this issue before?

dimanche 28 octobre 2018

PHPUnit failed assertion on Symfony PreAuthenticatedToken

I am getting the following error when running a PHPUnit test following an upgrade from Symfony 2.8 to 3.4:

Failed asserting that Symfony\Component\VarDumper\Cloner\Data Object &000000007bbb78b60000000050f520a2 ( matches expected 'Symfony\Component\Security\Core\Authentication\Token\PreAuthenticatedToken'.

This is the assertion that previously worked:

$security = $client->getProfile()->getCollector('security');
$this->assertTrue($security->isAuthenticated());
$this->assertEquals(
    'Symfony\Component\Security\Core\Authentication\Token\PreAuthenticatedToken',
    $security->getTokenClass()
);

The assertEquals assertion is the assertion it is failing on.

This is the full output (probably too big to be dumped into a SO question): https://gist.github.com/crmpicco/a927716570a4949caafec4ca1361bf63

What is causing this to fail?

What audio level should I develop for

I asked this question on CodeProject a while ago without getting a sensible answer. Let's try a more varied community.

I sometimes have to develop things with audio for desktop applications, and I come across some interesting "bugs" in this area. I can't seem to get my audio at the correct volume.

I don't mean how to edit the audio volume in files, nor am I asking a question about audio itself, but rather: what percentage of volume, given my analog connection to my computer (no boost/gain added), should I use?

I use 20% as my "standard" level, which is comfortable for Spotify etc.

Is this appropriate, or is there some industry best-practice level to develop and test at?

can jest output console log within test block output

So if I put a console.log inside a test, its output appears after all the tests, e.g.:

  authentication.spec.js
    register
      ✓ should be able to insert (128ms)
      ✓ should fail because of duplicate (100ms)
      ✓ should have user id assigned (1ms)
    login
      ✓ should be able to login for correct user (117ms)
      ✓ should fail for incorrect user (14ms)

  console.log tests/unit/authentication.spec.js:110
    we can see a message

What I would like to see instead is something like:

  authentication.spec.js
    register
      ✓ should be able to insert (128ms)
      ✓ should fail because of duplicate (100ms)
      ✓ should have user id assigned (1ms)
    login
      ✓ should be able to login for correct user (117ms)
        console.log tests/unit/authentication.spec.js:110
           we can see a message
      ✓ should fail for incorrect user (14ms)

So the console.log output should appear under "✓ should be able to login for correct user" in this case.

At what stage of development should we use Formal Verification?

I am a newcomer!

My question is simple: what is the purpose of formal verification? Normally we do software testing after the implementation phase; at what development phase should we employ formal verification techniques, for example model checking? Should we first formally verify our designs (system designs as finite state machines), then start coding and test later? Or do we do both verification and testing after implementation? I am a bit confused here. Does formal verification complement testing, or vice versa?

Regards, Ali

Expecting a button to be disabled fails

I am trying to perform some user interface end to end tests on an angular project. What I do:

  • Use Gherkin and Cucumber to write my scenarios.
  • I translate them in Protractor for example like this:
 Given('first step', {timeout: 90 * 1000},function(callback) {
     .... some code
     
    });
 
  When('second step',{timeout: 90 * 1000},  function(name, callback) {
       some code 
   
});

The reason I am not showing what is inside these functions is that everything there executes perfectly. In these functions I perform some steps in my app, such as logging in, entering some data and logging out; these run perfectly and I can see them executed in my browser when I run the tests.

  • Thirdly, I need to write some expectations after performing the steps. What I want to check is whether a button in my app is disabled or enabled after the test. Here is the HTML code for my button:
<button id="ref_button" type="submit" [disabled]="editForm.form.invalid || isSaving" class="btn btn-primary">

As you can see there is used Angular to disable or enable the button.

Now, to write assertions for Cucumber and Protractor, the only library I could find in the Cucumber documentation is Chai; the Cucumber page recommends the Chai library for the DOM.

Now I write this line as an expectation, following the Chai syntax, to check whether the button is disabled:

 document.getElementById("ref_button").should.have.attr("disabled");

The error that I get is this:

  **ReferenceError: document is not defined**

Is there something wrong? I have done plenty of research on this. Has anybody used a better assertion library for this case? Please help!

Error "Source option 5 is no longer supported. Use 6 or later" on Maven compile

I am getting the following error on $ mvn compile:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project Sym360: Compilation failure: Compilation failure: 
[ERROR] Source option 5 is no longer supported. Use 6 or later.
[ERROR] Target option 1.5 is no longer supported. Use 1.6 or later.
[ERROR] -> [Help 1]

Here is the code of my pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
 http://maven.apache.org/maven-v4_0_0.xsd">
 <modelVersion>4.0.0</modelVersion>
 <groupId>com.test.app</groupId>
 <artifactId>Tset</artifactId>
 <packaging>jar</packaging>
 <version>1.0-SNAPSHOT</version>
 <name>Test</name>
 <url>http://maven.apache.org</url>

 <properties>
   <maven.compiler.source>6</maven.compiler.source>
   <maven.compiler.target>1.6</maven.compiler.target>
 </properties>

<build>
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.1</version>
    </plugin>
  </plugins>
</pluginManagement>
</build>
<dependencies>

<!-- https://mvnrepository.com/artifact/org.seleniumhq.selenium/selenium-java -->
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.14.0</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.testng/testng -->
<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>6.14.3</version>
    <scope>test</scope>
</dependency>
</dependencies>
</project>

I tried to add properties in the pom.xml, but I am still getting the same error. Can someone help me resolve this issue? Thanks in advance
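For context, the 5/1.5 values in the error are the old defaults of the maven-compiler-plugin, which recent JDKs reject. One commonly suggested fix, sketched here with 1.8 as an example release (any currently supported value works), is to pin both settings and rebuild; if the properties are ignored, declaring a newer maven-compiler-plugin version explicitly is the usual next step:

```xml
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```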

unit testing asynchronous code and code without externally visible effects

Actually I have two questions, although a bit related: - I know that unit tests should test the public api. However if I have a close method that closes sockets and shuts down executors, however neither sockets nor executors are exposed to users of this api, should I test if this is done, or only that the method executed without error? Where is the borderline between public api/behavior and impl details? - if I test a method that performs some checks, then queues a task on executor service, then returns a future to monitor operation progress, should I test both this method and the task, even if the task itself is a private method or othervise not exposed code? Or should I instead test only the public method, but arrange for the task to be executed in the same thread by mocking an executor? In the latter case the fact that task is submitted to an executor using the execute() method would be an implementation detail, but tests would wait for tasks to complete to be able to check if the method along with it's async part works properly.

Ruby Test - Connect four

I am self-learning the Ruby language. Recently, I came upon an exercise to solve some tests related to Connect 4. Unfortunately I am unable to pass the tests. Specifically, here is what the test requires:

  it "should set token of player 1 to O" do
    @game.setplayer1
    @game.player1.should eql "O"
  end

And also

it "should get token of player 1" do
  @game.setplayer1
  @game.getplayer1.should eql "O"
end

Any ideas on how I could pass these tests?

Thank you in advance
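For reference, here is a minimal Game sketch that tests of this shape would accept. To be clear, the class is inferred purely from the specs above (setplayer1 assigns the "O" token; player1 and getplayer1 read it back), not from any actual exercise code:

```ruby
class Game
  attr_reader :player1   # exposes @player1, so game.player1 works

  def setplayer1
    @player1 = "O"       # token for player 1
  end

  def getplayer1         # explicit getter, as the second spec expects
    @player1
  end
end

game = Game.new
game.setplayer1
puts game.player1   # => "O"
```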

Tests using h2 and spring boot

I am developing an application using the PostgreSQL database, with schemas. On the entity I use the annotation @Table(schema = 'schema name'). When I run the tests on H2, I get the error:

Caused by: org.h2.jdbc.JdbcSQLException: Schema "schema name" not found; SQL statement:

I tried to create the file schema.sql, containing:

CREATE SCHEMA IF NOT EXISTS schema name AUTHORIZATION sa;
CREATE SCHEMA IF NOT EXISTS schema name2... AUTHORIZATION sa;

but without success. Could anyone help?
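If it helps, one approach often used with H2 in-memory tests is to create the schemas in the JDBC URL itself via H2's INIT parameter, so they exist before Hibernate creates or validates the tables. A hedged sketch (the property name assumes Spring Boot's datasource configuration; schema_name is a placeholder):

```properties
# src/test/resources/application.properties (schema_name is a placeholder)
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=CREATE SCHEMA IF NOT EXISTS schema_name
```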

samedi 27 octobre 2018

Logging in Golang test case

I have this:

package utils_test

import (
    "huru/utils"
    "testing"
)

func TestSetFields(t *testing.T) {

    t.Log("dest.Foo")

    src := struct{}{}
    dest := struct{}{}
    utils.SetFields(&src, &dest)

    t.Log("dest.Foo", dest)
}

and I run this command:

go test -run TestSetFields ./src/huru/utils/utils_test.go

all I see is this output:

ok      command-line-arguments  (cached)

I can't get any logging output, I tried log.Println and t.Log, to no avail.
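As a side note, go test only prints t.Log output for passing tests when given -v, and the "ok ... (cached)" line means a cached result was replayed rather than re-run. A command sketch using the path from the question:

```shell
# -v prints t.Log output even for passing tests;
# -count=1 forces a re-run instead of replaying the cached result
go test -v -count=1 -run TestSetFields ./src/huru/utils/utils_test.go
```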

Google Play console reports app crash, I can not reproduce

The Google Play Console, in its Android Vitals reports, provides me with this information:

java.lang.RuntimeException: 
  at android.app.ActivityThread.performLaunchActivity (ActivityThread.java:2955)
  at android.app.ActivityThread.handleLaunchActivity (ActivityThread.java:3030)
  at android.app.ActivityThread.-wrap11 (Unknown Source)
  at android.app.ActivityThread$H.handleMessage (ActivityThread.java:1696)
  at android.os.Handler.dispatchMessage (Handler.java:105)
  at android.os.Looper.loop (Looper.java:164)
  at android.app.ActivityThread.main (ActivityThread.java:6938)
  at java.lang.reflect.Method.invoke (Native Method)
  at com.android.internal.os.Zygote$MethodAndArgsCaller.run (Zygote.java:327)
  at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1374)
Caused by: java.lang.NullPointerException: 
  at com.golendukhin.whitenoise.GridAdapter.getCount (GridAdapter.java:52)
  at android.widget.GridView.setAdapter (GridView.java:243)
  at com.golendukhin.whitenoise.FoldersGridActivity.onCreate (FoldersGridActivity.java:60)
  at android.app.Activity.performCreate (Activity.java:7183)
  at android.app.Instrumentation.callActivityOnCreate (Instrumentation.java:1220)
  at android.app.ActivityThread.performLaunchActivity (ActivityThread.java:2908)

As I have understood it, this happens because the getCount method of the grid adapter throws a NullPointerException. Hence the grid adapter does not work, and hence the whole activity is not created. Only Samsung devices have this problem.

First, I tried to create the same emulated device with the same Android version, RAM and ARM. The emulator works absolutely fine; no errors.

Then I added some tests to make sure getCount works and the activity is created.

@RunWith(AndroidJUnit4.class)
public class GridViewTest {

    @Rule
    public ActivityTestRule<FoldersGridActivity> foldersGridActivityTestRule  = new  ActivityTestRule<>(FoldersGridActivity.class);

    @Test
    public void ensureGridAdapterIsNotNull() {
        FoldersGridActivity foldersGridActivity = foldersGridActivityTestRule.getActivity();
        View view = foldersGridActivity.findViewById(R.id.grid_view);
        GridView gridView = (GridView) view;
        GridAdapter gridAdapter = (GridAdapter) gridView.getAdapter();
        assertThat(gridAdapter, notNullValue());
    }

    @Test
    public void ensureArrayAdapterInGridActivityIsNotEmpty() {
        FoldersGridActivity foldersGridActivity = foldersGridActivityTestRule.getActivity();
        View view = foldersGridActivity.findViewById(R.id.grid_view);
        GridView gridView = (GridView) view;
        GridAdapter gridAdapter = (GridAdapter) gridView.getAdapter();
        assertTrue(gridAdapter.getCount() > 0 );
    }

    @Test
    public void gridViewIsVisibleAndClickable() {
        onData(anything()).inAdapterView(withId(R.id.grid_view)).atPosition(0).perform(click());
        onView(withId(R.id.toolbar_title_text_view)).check(matches(withText("rain")));
    }
}

And again, all emulated devices successfully pass the tests. Then I added Firebase Test Lab and ran my tests on a real Galaxy S9 device with Android 8. All tests pass.

Code of the adapter. Yes, I have overridden the getCount() method even though it does not make sense.

public class GridAdapter extends ArrayAdapter<TracksFolder> {
    @BindView(R.id.folder_text_view) TextView textView;
    @BindView(R.id.grid_item_image_view) ImageView imageView;

    private ArrayList<TracksFolder> tracksFolders;
    private Context context;

    GridAdapter(Context context, int resource, ArrayList<TracksFolder> tracksFolders) {
        super(context, resource, tracksFolders);
        this.tracksFolders = tracksFolders;
        this.context = context;
    }

    /**
     * @return grid item, customized via TracksFolder class instance
     */
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        if (convertView == null) {
            convertView = LayoutInflater.from(context).inflate(R.layout.grid_item, parent, false);
        }
        ButterKnife.bind(this, convertView);

        String folderName = tracksFolders.get(position).getFolderName();
        int imageResource = tracksFolders.get(position).getImageResource();
        textView.setText(folderName);
        imageView.setImageResource(imageResource);

        return convertView;
    }

    /**
     * Added to possibly avoid NPE on multiple devices
     * Could not reproduce this error
     * @return size of tracksFolders
     */
    @Override
    public int getCount() {
        return tracksFolders.size();
    }
}

So, I have a couple of questions. First, does this crash report mean that the crash happens every time a Samsung owner launches the app? Since some Samsung devices persist in the statistics, I assume not. Second, what am I supposed to do to somehow reproduce this mistake and eventually fix the bug?

I understand the question is broad, but I will really appreciate any help, because I absolutely don't know what to do next.

What is the best/complete/correct way of handling infinite loops? (Java)

In the code below, the infinite loop occurs at the for loop: if mtu = 8, then the increment (mtu - 8) is always 0, so the value of i never changes.

I inserted an if statement that says: if (i = i + (mtu - 8)) does not equal 0, then execute some code, else print an error message. I am aware that this if statement has introduced a new bug if mtu = 10.

What is the best/complete/correct way of breaking infinite loops? Should I be using an if statement? Or is there another way to handle this?

    private static List<String> splitFrame(String message) {
    List<String> slicedMessage = new ArrayList<>();
    int len = message.length();
    if (message.length() > mtu - 8) {
        for (int i = 0; i < len; i += (mtu - 8)) { // Infinite loop occurs here. Because if mtu = 8. Then (i = i + (mtu - 8)) will always be 0.
            if (!((i += (mtu - 8)) == 0)) { // I try to stop the infinite loop with this if statement. If the sum does not equal 0, then execute some code.
                try {
                    slicedMessage.add(message.substring(i, Math.min(len, i + (mtu - 8))));
                } catch (StringIndexOutOfBoundsException e) {
                    System.out.println("String Index Out Of Bounds --> Method splitFrame"); // TODO Replace sout with terminal.printDiag from the "MessageSender" class.
                }
            } else {
                System.out.println("MTU can not be set to 8"); // TODO Replace sout with terminal.printDiag from the "MessageSender" class.
            }
        }
    } else {
        System.out.println(createFrame(message));
    }
    return slicedMessage;
}
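One common pattern is to validate mtu before the loop ever starts, so the step is guaranteed positive and no in-loop guard is needed. A sketch under that assumption (illustrative names, not the original MessageSender code):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitFrameDemo {
    // Reject invalid configuration up front so the loop step is always > 0.
    static List<String> splitFrame(String message, int mtu) {
        if (mtu <= 8) {
            throw new IllegalArgumentException("mtu must be greater than 8");
        }
        int chunk = mtu - 8;                 // payload per frame
        List<String> slices = new ArrayList<>();
        for (int i = 0; i < message.length(); i += chunk) {
            slices.add(message.substring(i, Math.min(message.length(), i + chunk)));
        }
        return slices;
    }

    public static void main(String[] args) {
        // mtu = 12 gives a chunk size of 4
        System.out.println(splitFrame("abcdefghij", 12)); // prints [abcd, efgh, ij]
    }
}
```

With the guard at the top, the loop body stays a plain substring copy, and the mtu = 10 case simply works with a step of 2.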

Is it possible to catch an Assertion Error and still fail the test?

I'm trying to Catch an Assertion Error and still Fail the test but when I try the approach below my TestNG test will Pass.

I'm thinking I could use some type of Boolean flag to keep track of the pass or fail and then have the Test Listener class pass or fail the test based on the flag, but this will cause issues when running in parallel later, as the flag will need to be static.

Is there a better way to do this?

        try {
            softAssert.assertAll();
            getLogger().info("Test Passed");
        } catch (AssertionError ae) {
            getLogger().info("Test Failed");
            getLogger().info("Assertion Error: " + ae.getMessage());
            softAssert.fail("Assertion Error: " + ae.getMessage());
        }

Data is autodeleted from in-memory database

I use HSQLDB for testing purposes. The problem is that when I add some data in the init() method, I can get that data only from the test which runs first.

@Before
public void init() {
    if(isRun)
       return;
    isRun = true;

    Role role = new Role();
    role.setId(1);
    role.setType("User");
    roleDAO.save(role);

    User user = new User();
    user.setCredits(1000);
    user.setEmail("User@test.com");
    user.setUsername("User");
    user.setPassword("qwerty");
    user.setRoles(new HashSet<Role>(Arrays.asList(roleDAO.findById(1))));
    userDAO.save(user);

    User user2 = new User();
    user2.setCredits(1000);
    user2.setEmail("User2@test.com");
    user2.setUsername("User2");
    user2.setPassword("qwerty");
    user2.setRoles(new HashSet<Role>(Arrays.asList(roleDAO.findById(1))));
    userDAO.save(user2);
}

@Test
public void findUserByIdTest() {
    User user = userDAO.findByUsername("User");
    assertEquals(userDAO.findById(user.getId()), user);
}

@Test
public void addUserTest() {
    User user = new User();
    user.setCredits(1000);
    user.setEmail("Antony@test.com");
    user.setPassword("qwerty");
    user.setUsername("Antony");
    user.setRoles(new HashSet<Role>(Arrays.asList(roleDAO.findById(1))));
    userDAO.save(user);

    assertEquals(userDAO.findByUsername("Antony"), user);
}

@Test
public void updateUserTest() {
    User user = userDAO.findByUsername("User");
    user.setCredits(0);
    assertEquals(userDAO.findByUsername("User").getCredits(), (Integer) 0);
}

@Test
public void removeUserTest() {
    userDAO.remove(userDAO.findByUsername("User"));

    assertNull(userDAO.findByUsername("User"));
}

It so happens that removeUserTest() always runs first, and when I call findAll() there, I see the data I set up in the init() method. After that the other test methods run, but if I call findAll() there, it returns nothing, meaning no data exists.
In addition, I have set hibernate.hbm2ddl.auto=create. What am I missing here? Why can I get the data in the first method that runs, while in the others the data just disappears?
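If each test runs in its own transaction that is rolled back (or the schema is recreated between tests), the isRun guard is a likely culprit: it seeds the data only once, so every test after the first starts empty. A plain-Java illustration of that interaction (an analogy, not the actual DAO/Hibernate setup):

```java
import java.util.ArrayList;
import java.util.List;

public class GuardDemo {
    private boolean isRun = false;
    final List<String> db = new ArrayList<>();

    void init() {              // @Before analogue with the run-once guard
        if (isRun) return;
        isRun = true;
        db.add("User");
    }

    void rollback() {          // per-test transaction rollback analogue
        db.clear();
    }

    public static void main(String[] args) {
        GuardDemo demo = new GuardDemo();
        demo.init();                                  // first test: data is seeded
        System.out.println("first test sees User: " + demo.db.contains("User"));
        demo.rollback();
        demo.init();                                  // second test: guard skips seeding
        System.out.println("second test sees User: " + demo.db.contains("User"));
    }
}
```

Dropping the guard (or re-seeding in every @Before) is the behavior the tests above appear to expect.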

Testing event listener on React component

I'm quite new to React and would like to test simulating a change event on a component. Here is the flow:

  1. Assert component property is as expected
  2. Trigger change event on component (which has a handler attached that updates a component property)
  3. Assert component property has changed to what is expected

This seems pretty basic, but I've tried a few ways to test it - none of which have worked. I'd be really grateful for any advice or pointers on how to approach testing this type of thing in React.

The code works fine but I want to be able to test it. Here's a simplified version of the (in progress) code I'm trying to test:

import React, {Component} from 'react';
import '../search_option/SearchOption';
import SearchOption from "../search_option/SearchOption";

class AccessibleGlobalSearch extends Component {

    constructor(props) {
        super(props);
        this.handle_search_selection = this.handle_search_selection.bind(this);
        this.state = {

            active_search_url: '/site_search/results',

            select_type: 'Select a search type',
            website_search_url: '/site_search/results',
            catalogue_search_url: '/catalogue_search/results/',
            search_options: {
                group_name: 'search_type',
                options: [
                    {
                        label: 'Search Catalogue',
                        id: 'catalogue_search'
                    },
                    {
                        label: 'Search the website',
                        id: 'website_search'
                    }
                ]
            }
        }
    }

    handle_search_selection(e) {

        if(e.target.id === 'catalogue_search') {
            this.setState({active_search_url: this.state.discovery_search_url});
        }

        if(e.target.id === 'website_search') {
            this.setState({active_search_url: this.state.website_search_url});
        }
    }

    render() {
        return (
            <form className="global-search-js" action={this.state.active_search_url} onChange={this.handle_search_selection}>
                <fieldset>
                    <legend>{this.state.select_type}</legend>
                    <SearchOption group_name={this.state.search_options.group_name} options={this.state.search_options.options}/>
                </fieldset>
            </form>
        );
    }
}

export default AccessibleGlobalSearch;

And here is one of the approaches I've been using to test it

import React from 'react';
import ReactDOM from 'react-dom';
import TestRenderer from 'react-test-renderer';
import ReactTestUtils from 'react-dom/test-utils';
import AccessibleGlobalSearch from './AccessibleGlobalSearch';

const test_renderer = TestRenderer.create(<AccessibleGlobalSearch/>);
const test_instance = test_renderer.root;
const instance = test_renderer.getInstance();

it('changes the form action to when that option is selected', () => {

    const rendered_component = test_renderer.toJSON();
    expect(rendered_component.props.action).toBe('/site_search/results');

    const form = test_instance.findByType('form');

    ReactTestUtils.Simulate.change(form, {"target": {"id": "catalogue_search", "checked": true}});

    expect(instance.state.active_search_url).toBe('catalogue_search/results/');

  });

Which tool is best for performance testing of WebView pages in native apps (adaptive screens)?

Can anyone suggest a tool for testing the performance of WebViews in native apps (iOS & Android)?

Testing a group of parameters with JUnit in Netbeans

I can't figure out how to use JUnit when I want to test this:

public int Agregar(int idcat, String nom, String desc)
    {
        int res = 0;
        Categoria nueva = new Categoria(idcat, nom, desc);

        DatosCategorias.add(nueva);
        res = 1;

        return res;
    }

This is what I do, clearly wrong, but I don't know the mistake, or how to formulate an appropriate solution to this.

   public void testAgregar_3args() {
        System.out.println("Agregar");
        int idcat = 1;
        String nom = "as";
        String desc = "asas";
        AuxCategorias instance = new AuxCategorias();
        int expResult = 1;
        int result = instance.Agregar(idcat, nom, desc);
        assertEquals(expResult, result);
    }

I've been looking, but the examples are either really easy or really difficult. I'm pretty sure that if I understand this one, I can do the rest. Can anyone tell me how to do this? TIA

Tox+Python: WinError 5 Access denied (despite running as admin)

This problem is driving me crazy.

I'm using Tox for testing my Python package and suddenly I started getting this error:

When I try to recreate my tox environment (tox --recreate), it keeps failing with the "WinError 5: Access is denied"

Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\\Users\\NRADOJ~1\\AppData\\Local\\Temp\\pip-unpack-bumke6sa\\scipy-1.1.0-cp36-none-win_amd64.whl' Consider using the --user option or check the permissions.

All suggestions I found online were "Run the command prompt as admin", but that doesn't work for me...

It seems that the error is thrown when pip attempts to clean up the temp directory from the downloaded .whl file for SciPy.

Any ideas?

Integration Testing w/ Jest

I have an Express server written in ES6 and I am trying to set up my integration tests using Jest.

I looked into setupFiles, but it runs for every file, which is kind of slow. I looked into setupGlobal, but it won't transpile ES6 code.

Is there a canonical solution to this problem?

How to get jest-tests execution results?

I use jest.run() to run my tests using Node.js. My tests complete, but I don't know what the result is: error or success. The result variable is undefined here. How do I get the test execution results?

const { startServer, closeServer } = require('../bin/www');
const jest = require('jest');
const execute = require('./executeCli');

const runAll = async () => {
  await execute('npm run restoreData');
  console.log('Test database is created');
  await startServer();
  console.log('Test server is started');
  const result = await jest.run();
  console.log('Tests are completed');
  console.log('Good job, developer!');
  await closeServer();
  process.exit(0);
};

vendredi 26 octobre 2018

styleName unknown prop when mocking react-css-modules using Jest

I am using jest.mock to mock out react-css-modules for testing my react components.

However, the styleName prop still exists in my components and React complains with Warning: Unknown prop styleName on <p> tag.

Is there any way to solve this in a testing environment?

Selenium "Unrecognized command: actions"

I am trying to do a mouse click based on position. However, I can't seem to make actions work and always get the following message. I reproduced the problem by trying to double-click on the google.com main search bar.

For help, see: https://nodejs.org/en/docs/inspector (node:38864) UnhandledPromiseRejectionWarning: UnknownCommandError: Unrecognized command: actions warning.js:18 at buildRequest (c:\GitRepo\MMT4\src\javascript\Web.Tests\node_modules\selenium-webdriver\lib\http.js:375:9) at Executor.execute (c:\GitRepo\MMT4\src\javascript\Web.Tests\node_modules\selenium-webdriver\lib\http.js:455:19) at Driver.execute (c:\GitRepo\MMT4\src\javascript\Web.Tests\node_modules\selenium-webdriver\lib\webdriver.js:696:38) at process._tickCallback (internal/process/next_tick.js:68:7) (node:38864) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 3) warning.js:18 (node:38864) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

I imported the packages with npm:

"devDependencies": {
    "@types/node": "^10.12.0"
},
"dependencies": {
    "chromedriver": "^2.43.0",
    "selenium-webdriver": "^4.0.0-alpha.1"
}

According to the documentation I found here, it should work: https://seleniumhq.github.io/selenium/docs/api/javascript/index.html https://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/lib/input_exports_Actions.html

I also found various examples on the web suggesting it should work, but I cannot see what is missing in this basic example:

"use strict";
require('chromedriver');
const { Builder, By, Key, until, ActionSequence } = require('selenium-webdriver');
(async function run() {
    let driver = await new Builder().forBrowser('chrome').build();
    try {
        await driver.get('http://www.google.com');

        await driver
            .actions()
            .doubleClick(By.id('lst-ib'))
            .perform();
    }
    finally {
        await driver.quit();
    }
})();

I also tested in a project with Protractor and it seems to work, but I don't see why I would need Protractor in this project since it does not use Angular.

Thanks

selenium python baseclass setup teardown

Is it possible to create multiple classes:

  • a base class (main class) with setup (login) and teardown (logout)
  • a Smoke class with a few test cases (methods)
  • a Sanity class with a few test cases (methods; not including the smoke cases again)

Selenium + Python + PyCharm (IDE)

How to pass chai assertions to component using react

I am using the Chai assertion library to run some trivial tests on my code. I have a React component running on the client. It needs to receive assertion statements as props. Right now I am passing strings like assert.isDefined(myVariable) and running them using eval(). I am not happy with this state of things. How could I pass an array of assertions using props without using eval?

Thanks for your help.

What's a more Pythonic way to test parts of my code?

I'm on Windows 10, Python 2.7.13 installed via Anaconda. Recently I've been writing a lot of scripts to read/write data from files to other files, move them around, and do some visualizations with matplotlib. My workflow has been having an Anaconda Prompt open next to Sublime Text, and I copy/paste individual lines into my workspace to test something. This doesn't feel like a "best practice", especially because I can't copy/paste multiple lines with indents, so I have to write them out manually twice. I'd really like to find a better way to work on this. What would you recommend changing?

Custom HTML JUnit report using Gradle

I'm trying to set up a test environment for an Android application using Android Studio and JUnit/JaCoCo for unit tests and code coverage. I can already produce two separate HTML files for the two types of tests but, while the one generated by JaCoCo is good for me, the one produced by JUnit is full of verbose content and gives poor information about the individual tests, as you can see in the picture:

[screenshot of the JUnit HTML report]

Is there a way to modify this layout to generate a custom report for JUnit? In my case, for example, I would remove all those lines before the test case results and add more columns with info about the test cases.

Odoo.sh install fails on website_sale.tests module

I have this Odoo theme with a couple of custom apps made for a company I'm working for. Everything works great on my local machine as I expect, but when I push the code to Odoo.sh, I get the following in the logs and the install fails:

2018-10-26 13:45:38,534 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: FAIL
2018-10-26 13:45:38,537 4 INFO keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ======================================================================
2018-10-26 13:45:38,537 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: FAIL: test_02_admin_checkout (odoo.addons.website_sale.tests.test_sale_process.TestUi)
2018-10-26 13:45:38,539 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: Traceback (most recent call last):
2018-10-26 13:45:38,541 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 362, in phantom_poll
2018-10-26 13:45:38,543 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.fail(pformat(json.loads(line[prefix:])))
2018-10-26 13:45:38,545 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
2018-10-26 13:45:38,546 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` return _default_decoder.decode(s)
2018-10-26 13:45:38,548 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
2018-10-26 13:45:38,550 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2018-10-26 13:45:38,551 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
2018-10-26 13:45:38,553 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` raise JSONDecodeError("Expecting value", s, err.value) from None
2018-10-26 13:45:38,554 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2018-10-26 13:45:38,556 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: `
2018-10-26 13:45:38,557 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` During handling of the above exception, another exception occurred:
2018-10-26 13:45:38,558 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: `
2018-10-26 13:45:38,560 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` Traceback (most recent call last):
2018-10-26 13:45:38,561 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/addons/website_sale/tests/test_sale_process.py", line 13, in test_02_admin_checkout
2018-10-26 13:45:38,563 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.phantom_js("/", "odoo.__DEBUG__.services['web_tour.tour'].run('shop_buy_product')", "odoo.__DEBUG__.services['web_tour.tour'].tours.shop_buy_product.ready", login="admin")
2018-10-26 13:45:38,564 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 459, in phantom_js
2018-10-26 13:45:38,566 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.phantom_run(cmd, timeout)
2018-10-26 13:45:38,567 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 386, in phantom_run
2018-10-26 13:45:38,568 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` result = self.phantom_poll(phantom, timeout)
2018-10-26 13:45:38,570 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 364, in phantom_poll
2018-10-26 13:45:38,571 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.fail(lline)
2018-10-26 13:45:38,572 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` AssertionError: error tour shop_buy_product failed at step #cart_products tr:contains("16 gb") a.js_add_cart_json:eq(1)
2018-10-26 13:45:38,574 4 INFO keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ======================================================================
2018-10-26 13:45:38,574 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: FAIL: test_03_demo_checkout (odoo.addons.website_sale.tests.test_sale_process.TestUi)
2018-10-26 13:45:38,575 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: Traceback (most recent call last):
2018-10-26 13:45:38,576 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 362, in phantom_poll
2018-10-26 13:45:38,577 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.fail(pformat(json.loads(line[prefix:])))
2018-10-26 13:45:38,579 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
2018-10-26 13:45:38,581 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` return _default_decoder.decode(s)
2018-10-26 13:45:38,582 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
2018-10-26 13:45:38,584 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2018-10-26 13:45:38,585 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
2018-10-26 13:45:38,586 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` raise JSONDecodeError("Expecting value", s, err.value) from None
2018-10-26 13:45:38,588 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2018-10-26 13:45:38,589 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: `
2018-10-26 13:45:38,591 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` During handling of the above exception, another exception occurred:
2018-10-26 13:45:38,592 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: `
2018-10-26 13:45:38,593 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` Traceback (most recent call last):
2018-10-26 13:45:38,594 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/addons/website_sale/tests/test_sale_process.py", line 16, in test_03_demo_checkout
2018-10-26 13:45:38,595 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.phantom_js("/", "odoo.__DEBUG__.services['web_tour.tour'].run('shop_buy_product')", "odoo.__DEBUG__.services['web_tour.tour'].tours.shop_buy_product.ready", login="demo")
2018-10-26 13:45:38,597 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 459, in phantom_js
2018-10-26 13:45:38,598 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.phantom_run(cmd, timeout)
2018-10-26 13:45:38,599 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 386, in phantom_run
2018-10-26 13:45:38,601 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` result = self.phantom_poll(phantom, timeout)
2018-10-26 13:45:38,602 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` File "/home/odoo/src/odoo/odoo/tests/common.py", line 364, in phantom_poll
2018-10-26 13:45:38,603 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` self.fail(lline)
2018-10-26 13:45:38,605 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: ` AssertionError: error tour shop_buy_product failed at step #cart_products tr:contains("16 gb") a.js_add_cart_json:eq(1)
2018-10-26 13:45:38,606 4 INFO keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: Ran 3 tests in 66.271s
2018-10-26 13:45:38,606 4 ERROR keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: FAILED
2018-10-26 13:45:38,607 4 INFO keydigital-tb-import-test-166502 odoo.addons.website_sale.tests.test_sale_process: (failures=2)
2018-10-26 13:45:38,607 4 INFO keydigital-tb-import-test-166502 odoo.modules.module: odoo.addons.website_sale.tests.test_sale_process tested in 66.34s, 9105 queries
2018-10-26 13:45:38,607 4 ERROR keydigital-tb-import-test-166502 odoo.modules.module: Module website_sale: 2 failures, 0 errors
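A side note on the `JSONDecodeError` in the traceback above: `phantom_poll` calls `json.loads()` on each line of browser output, and a line that is not valid JSON (such as the plain-text tour error message) triggers exactly the "Expecting value: line 1 column 1 (char 0)" failure seen in the log. A minimal sketch of that behavior (the sample string is illustrative, not taken from the actual browser output):

```python
import json

# json.loads() expects a JSON document; a plain-text error line from the
# tour runner starts with 'e', which is not a valid JSON value, so the
# decoder fails at the very first character.
line = "error tour shop_buy_product failed at step ..."
try:
    json.loads(line)
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```

This is why the log shows two chained tracebacks: the decode error occurs first, and Odoo's test harness then re-raises it as the `AssertionError` carrying the tour failure message.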

What I noticed, however, is that on the development stage I can run the code and everything works just like on my local machine, but when I move it to the staging stage, it comes up as a fresh Odoo installation.

Any help would be appreciated.