Thursday, October 6, 2016

Using @mark.incremental and metafunc.parametrize in a pytest test class

The purpose of @mark.incremental is that if one test in a class fails, the tests after it are marked as expected to fail (xfail).

However, when I use this in conjunction with parametrization, I get undesired behavior.

For example, in the case of this fake code:

conftest.py:
def pytest_generate_tests(metafunc):
    metafunc.parametrize("input", [True, False, None, False, True])

test.py:
import pytest

@pytest.mark.incremental
class TestClass:
    def test_input(self, input):
        assert input is not None
    def test_correct(self, input):
        assert input == True

I'd expect the test class to run

  • test_input on True,

  • followed by test_correct on True,

  • followed by test_input on False,

  • followed by test_correct on False,

  • followed by test_input on None,

  • followed by (xfailed) test_correct on None, and so on.

Instead, what happens is that the test class

  • runs test_input on True,
  • then runs test_input on False,
  • then runs test_input on None,
  • then marks everything from that point onwards as xfailed (including the test_corrects).

What I assume is happening is that parametrization takes priority over proceeding through the functions in the class: pytest runs each function for every parameter before moving on to the next function, and the incremental failure state carries across parameters. The question is whether this behaviour can be overridden or worked around somehow, as the current situation makes marking a class as incremental completely useless to me.
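One possible workaround is a sketch, not an official pytest feature: adapt the incremental-testing recipe from the pytest docs so that failures are recorded per parametrize id rather than globally on the class. A failure for one parameter then only xfails later tests running with that same parameter, regardless of execution order. This assumes parametrized items expose a `callspec` attribute with a stable `id` (true in current pytest versions):

```python
# conftest.py (sketch, adapted from the incremental-testing recipe in
# the pytest docs): key the recorded failure by the parametrize id, so
# test_correct[None] xfails because test_input[None] failed, while the
# True/False parameters are unaffected.
import pytest


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords and call.excinfo is not None:
        # ``callspec`` is only present on parametrized items
        param_id = item.callspec.id if hasattr(item, "callspec") else None
        failed = getattr(item.parent, "_failed_params", {})
        failed[param_id] = item.name
        item.parent._failed_params = failed


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        param_id = item.callspec.id if hasattr(item, "callspec") else None
        failed = getattr(item.parent, "_failed_params", {})
        if param_id in failed:
            pytest.xfail("previous test failed (%s)" % failed[param_id])
```

With this in place the class itself stays unchanged; only the bookkeeping in conftest.py knows about parameters.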

(Is the only way to handle this to copy-paste the code for the class over and over, each time with different parameters? The thought is repulsive to me.)
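No copy-pasting should be needed. Independently of how the failure state is tracked, the collected items can be regrouped so that all tests for one parameter run consecutively, which restores the ordering described above. This is a sketch that again assumes parametrized items carry a `callspec.id`; since Python's sort is stable, tests sharing a parameter id keep their original relative order (test_input before test_correct):

```python
# conftest.py (sketch): regroup parametrized tests by parameter id so
# that test_input[x] is immediately followed by test_correct[x].
# Note: this sorts the whole session's items, which is fine for a
# single-class test file but may interleave unrelated modules.

def pytest_collection_modifyitems(items):
    items.sort(key=lambda item: item.callspec.id
               if hasattr(item, "callspec") else "")
```

The parameter ids themselves end up in alphabetical order, but within each id the class's function order is preserved.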
