Friday, November 6, 2020

Python test data validation, methodology, framework

I'm looking for best practices, available tools, frameworks etc. for the following task.

We have a HIL test setup that runs test cases and produces time series data as its output, in the form of one CSV file per test case. My task is to write validation scripts that analyze this data to determine the result of each test case. The validation scripts are run offline.

Basically I have thousands of these folders:

  • Testcase_A_001
    • results.csv
    • log.txt

What I want to do, for each folder (test case):

  • Read results into pandas df
  • Determine what kind of test case it is (based on the log)
  • Call the appropriate validation script (each testcase type has a different one)
  • Validation script uses assertions to validate results
  • Generate a simple test report

After going through all test cases, I need to generate an overview report.
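The per-folder loop above could be sketched roughly like this. Everything here is a hypothetical illustration: the `type:` line in the log, the column names, and the validator logic are all assumptions, not part of the actual setup.

```python
import pathlib
import tempfile

import pandas as pd


def detect_testcase_type(log_path):
    # Hypothetical log format: assume a line like "type: A" identifies the case.
    for line in log_path.read_text().splitlines():
        if line.startswith("type:"):
            return line.split(":", 1)[1].strip()
    return "unknown"


def validate_type_a(df):
    # Hypothetical assertion: the signal must stay at or below a limit.
    assert (df["signal"] <= 5.0).all(), "signal exceeded 5.0"


# Dispatch table: one validator per test case type.
VALIDATORS = {"A": validate_type_a}


def run_all(root):
    results = {}
    for folder in sorted(root.glob("Testcase_*")):
        df = pd.read_csv(folder / "results.csv")
        kind = detect_testcase_type(folder / "log.txt")
        validator = VALIDATORS.get(kind)
        if validator is None:
            results[folder.name] = "SKIPPED"
            continue
        try:
            validator(df)
            results[folder.name] = "PASS"
        except AssertionError as exc:
            results[folder.name] = f"FAIL: {exc}"
    return results


# Demo on a throwaway folder structure mimicking the layout above.
root = pathlib.Path(tempfile.mkdtemp())
case = root / "Testcase_A_001"
case.mkdir()
(case / "results.csv").write_text("t,signal\n0,1.0\n1,4.5\n")
(case / "log.txt").write_text("type: A\n")
print(run_all(root))
```

The missing piece, as described below, is what to use instead of the hand-rolled `try`/`except` and `results` dict: a framework that handles the assertions and turns that dict into proper per-case and overview reports.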

I have no problem with the data manipulation, writing the assertion logic, plotting the signals, etc. What I lack is a test framework that helps with the assertions and reporting.

I originally wanted to use unittest, but if I understand correctly, it is not possible to simply instantiate a unittest class with my dataframe passed as an argument, run it, and save the reports to a folder.

Is there any package that can help me with this kind of testing? Are there any best practices for how this kind of task should be handled?
