Friday, August 18, 2017

Best practice for tests that run on multiple input files in Python

Most test frameworks assume that "1 test = 1 Python method/function", and consider a test passed when the function executes without raising an assertion error (or any other exception).

I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:

import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test
        pass

    def list_testcases(self):
        # essentially os.listdir('tests/') filtered on *.foo files
        return [f for f in os.listdir('tests/') if f.endswith('.foo')]

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)

My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.

This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:

  • I don't get relevant reporting like "X failures out of Y tests", nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important for me.)

  • I don't get test independence. The second test case runs in the environment left by the first, and so on. The first failure stops the test suite: test cases coming after a failure are not run at all.

  • I get the feeling that I'm abusing my test framework: there's only one test function, so unittest's automatic test discovery sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.

An obvious alternative is to change my code to something like

class Test(unittest.TestCase):
    def one_file(self, filename):
        # do the actual test
        pass

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")

Then I get all the advantages of unittest back, but:

  • It's a lot more code to write.

  • It's easy to "forget" a test case, i.e. create a test file in tests/ and forget to add it to the Python test suite.

Is there any tool or any trick to easily get the best of both, i.e. automatic testcase discovery (list tests/*.foo files), test independence and proper reporting?
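(One stdlib-only trick along these lines is to generate one test method per input file at import time, so unittest's discovery, independence, and reporting all apply per file. A minimal sketch, assuming the input files live in tests/ and with a hypothetical one_file check:)

```python
import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # Hypothetical check; replace with the real compiler invocation
        # and assertions on its output.
        self.assertTrue(filename.endswith(".foo"))

def _make_test(filename):
    def test(self):
        self.one_file(filename)
    return test

def add_file_tests(cls, directory="tests"):
    # Attach one test_* method per *.foo file in `directory`, so
    # unittest discovers, runs, and reports each input file as an
    # independent test.
    for name in sorted(os.listdir(directory)):
        if name.endswith(".foo"):
            safe = name.replace(".", "_").replace("-", "_")
            setattr(cls, "test_" + safe, _make_test(name))

# add_file_tests(Test)  # call once, after the class definition
```

(pytest users can get a similar effect with @pytest.mark.parametrize over glob.glob('tests/*.foo'), which also reports each file separately.)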
