Thursday, November 2, 2017

Is there an equivalent of xfail for coverage?

Occasionally I will handle some condition that I'm fairly certain is an edge case, but I can't think of an example where it would come up, so I can't write a test case for it. I'm wondering whether there's a way to add a pragma to my code such that if, in the future, some change in the tests accidentally starts covering this line, I would be alerted to this fact (since such accidental coverage produces the needed test case, but possibly as an implementation detail, leaving the coverage of this line fragile). I've come up with a contrived example of this:

In mysquare.py:

def mysquare(x):
    ov = x * x

    if abs(ov) != ov or type(abs(ov)) != type(ov):
        # Always want this to be positive, though why would this ever fail?!
        ov = abs(ov)  # pragma: no cover

    return ov

Then in my test suite I start with:

from hypothesis import given
from hypothesis.strategies import one_of, integers, floats

from mysquare import mysquare

NUMBERS = one_of(integers(), floats()).filter(lambda x: x == x)

@given(NUMBERS)
def test_mysquare(x):
    assert mysquare(x) == abs(x * x)

@given(NUMBERS)
def test_mysquare_positive(x):
    assert mysquare(x) == abs(mysquare(x))

The pragma-excluded line is never hit, but that's only because I can't think of a way to reach it! However, suppose that at some time in the far future I decide that mysquare should also support complex numbers, so I change NUMBERS:

from hypothesis.strategies import complex_numbers

NUMBERS = one_of(integers(), floats(),
                 complex_numbers()).filter(lambda x: x == x)

Now I'm suddenly, unexpectedly covering the line, but I'm not alerted to this fact. Is there something like the no-cover pragma that works more like pytest.xfail, i.e. a positive assertion that that particular line is covered by no tests? Preferably something compatible with pytest.
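For reference, here is a rough workaround I can imagine, though it's only a sketch and not a real coverage.py feature: make the supposedly-unreachable line raise at runtime whenever it executes while a pytest test is running, using the PYTEST_CURRENT_TEST environment variable that pytest sets during each test. The helper name covered_but_unexpected is invented for illustration:

```python
import os

def covered_but_unexpected():
    """Invented helper: raise if this supposedly-unreachable code
    runs while a pytest test is executing."""
    # pytest sets PYTEST_CURRENT_TEST for the duration of each test.
    if os.environ.get("PYTEST_CURRENT_TEST"):
        raise AssertionError("line marked as never-covered was executed")

def mysquare(x):
    ov = x * x

    if abs(ov) != ov or type(abs(ov)) != type(ov):
        # Fail loudly in tests instead of silently gaining coverage.
        covered_but_unexpected()
        ov = abs(ov)

    return ov
```

This isn't a true coverage-level assertion (it fires at runtime rather than being checked against coverage data), but it would at least turn the accidental coverage from the complex_numbers change into a visible test failure.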
