Thursday, March 26, 2015

How to manage the expectations of a test that uses a fixture with parameters.

Pytest gives you the ability to parameterize fixtures:



import pytest

@pytest.fixture(params=['a'])
def root(request):
    return request.param


So now in a test that requests our fixture "root" we can use our param:



def test_something(root):
    assert root == 'a'  # this will pass


We can extend our fixture with another fixture:



@pytest.fixture(params=['a'])
def root(request, leaf):
    return request.param + leaf


@pytest.fixture(params=['b', 'c'])
def leaf(request):
    return request.param
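
With the fixtures composed like this, a test that requests root runs once per leaf parameter, so there are two generated test instances and the right expectation depends on which combination is active. A quick sketch of what that means (the test name is illustrative):


def test_root(root):
    # pytest generates one test instance per parameter combination:
    # root == 'ab' (leaf == 'b') and root == 'ac' (leaf == 'c')
    assert root in ('ab', 'ac')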


Now that we have done this, it's not clear how to manage our test expectations.


To be clear, by test expectations I mean...



def test_something(root):
    assert root == ???   # <---- this expectation has to be hardcoded


As a toy solution for this problem we could do:



def test_something(root):
    assert root == {"ac": "ac", "ab": "ab"}[root]


But that's horrible for a host of obvious reasons.
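
For this two-level case, one pattern pytest itself does support is indirect parametrization: pair each input with its expected value on the test, and feed the input into the fixture through request.param. A minimal sketch (the names and values are illustrative, and note it moves the parameters off the fixtures and onto the test, which is exactly the trade-off in question):


import pytest


@pytest.fixture
def leaf(request):
    # Receives its value from the indirect parametrization on the test below.
    return request.param


@pytest.fixture
def root(leaf):
    return 'a' + leaf


# Each leaf value is paired with the expected composed root value.
@pytest.mark.parametrize(
    "leaf,expected",
    [("b", "ab"), ("c", "ac")],
    indirect=["leaf"],
)
def test_root(root, expected):
    assert root == expected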


See, at this point we have created a 'testing tree' with 'a' at the root and 'ab' and 'ac' as the leaves:



            'a'
      +------+------+
      |             |
      v             v
    'ab'          'ac'


And while in this case it's easy to keep track of, it needs to be effortless and obvious in order for it to work, as the tree is going to have 2^N leaves (tests).
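
To get a sense of the growth, here is a sketch that stacks one more parameterized fixture on top of the earlier ones (branch is a made-up name); every extra two-valued fixture doubles the number of leaves:


import pytest


@pytest.fixture(params=['b', 'c'])
def leaf(request):
    return request.param


@pytest.fixture(params=['a'])
def root(request, leaf):
    return request.param + leaf


@pytest.fixture(params=['x', 'y'])
def branch(request, root):
    # 2 leaf values x 2 branch values -> 4 combinations:
    # 'abx', 'aby', 'acx', 'acy'
    return root + request.param


def test_branch(branch):
    assert len(branch) == 3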


The idea of a testing tree is very powerful:



  • It would allow for easily testing all relevant states: where in a typical test suite you would need to write all the tests by hand, here they would be generated for you.

  • It would allow for pinpointing where in the tree failures were happening, letting you identify the fixture that was possibly the root cause of the problem.


So I'm curious whether a tool exists, in Python or another language, that provides this functionality while allowing us to manage our testing expectations.


What would such a tool look like? Well, my guess is that it would have to provide the user with a way to input expectations for a test. Something like:



> "test_ab_cd)
> inspect info("test_ab_cd")
> ab == ???
> ac == ???
> some testing info
> "ab" === ???
> enter what "ab" should equal?
> some information about the test
> "ac" === ???
> inspect info("test_ab_cd")
> ab == ab
> ac == ???


And additionally provide logs and a testing file for easy reading, something like...



test_something.txt
=========================
- test_something_ab
    assert ab == ab
- test_something_ac
    assert ac == ac


which in a more complex example would need to look something like this (essentially what regular tests look like):



test_something.txt
====================
- test_something_graph
    x = 1
    y = 2
    start = node(x, y)
    que = SimplePriority()
    finish_node = graph(start, que)
    assert finish_node == 5


Which would have to be generated via inspection of the various test fixtures involved.
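
As a very rough approximation of the "input expectations" idea in plain pytest, one could imagine storing the recorded answers in a file keyed by the generated test id and looking them up from a fixture. A minimal sketch, where expectations.json and the expected fixture are entirely hypothetical:


import json

import pytest

EXPECTATIONS_FILE = "expectations.json"  # hypothetical store of recorded answers


@pytest.fixture
def expected(request):
    # Key the stored expectation by the generated test id,
    # e.g. "test_module.py::test_something[b]".
    key = request.node.nodeid
    try:
        with open(EXPECTATIONS_FILE) as f:
            stored = json.load(f)
    except FileNotFoundError:
        stored = {}
    if key not in stored:
        pytest.fail("no recorded expectation for {}; record one in {}".format(
            key, EXPECTATIONS_FILE))
    return stored[key]


def test_something(root, expected):
    assert root == expected


The interactive "enter what this should equal" half and the generated log file would still have to come from the tool itself.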


It's more than likely the solution here is to just have "DAMP" tests and not deeply nested fixtures. But I'm curious if there is a way to manage the expectations dynamically, or whether there are any examples in other languages of such a testing tool. I searched around but didn't find anything.

