---+ BRIEF
Q: are there tools that can roll up Perl's TAP (Test Anything Protocol) output hierarchically (at least; even better, not just a simple hierarchy)?
E.g. take something like
ok 1 - [gui][database] fribble cleared
ok 2 - [database] - bar set
not ok 3 - [gui] - widget bobbling
ok 4 - [gui][critical] - finkle barbed correctly
and produce summaries like
Test Failures
By area
[gui] - 2 failures
[gui/critical] - 1 failure
[gui/non-critical] - 1 failure
[database] ...
...
By Severity
[critical] - 8 failures
[critical/gui] - 1 failure
[critical/database] - 6 failures
...
Note: I am not just looking for classes, suites of tests, and subtests. Although those are good, sometimes I just want to make the output of a lot of tests easier to understand. Also, tests of different categories and severities are naturally interleaved in different ways.
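To make the sort of rollup I have in mind concrete, here is a minimal sketch using the core TAP::Parser module. The bracketed-tag convention ([gui], [critical], ...) and the script name test_rollup.pl are my own assumptions for illustration, not an existing CPAN tool:

#!/usr/bin/perl
# test_rollup.pl (hypothetical name) - count TAP failures per bracketed tag.
# A minimal sketch, assuming the [area][severity] tagging convention above.
use strict;
use warnings;
use TAP::Parser;

my $tap    = do { local $/; <STDIN> };            # slurp TAP from stdin
my $parser = TAP::Parser->new( { tap => $tap } );

my %failures;
while ( my $result = $parser->next ) {
    next unless $result->is_test && !$result->is_ok;
    my @tags = $result->description =~ /\[([^\]]+)\]/g;   # "[gui][critical] ..." -> (gui, critical)
    $failures{$_}++ for @tags ? @tags : ('untagged');
}

print "Test Failures\n";
for my $tag ( sort { $failures{$b} <=> $failures{$a} } keys %failures ) {
    printf "[%s] - %d failure%s\n", $tag, $failures{$tag},
        $failures{$tag} == 1 ? '' : 's';
}

Something like prove -v t/ | perl test_rollup.pl would then give the flat per-tag counts; the two-level [area/severity] summaries would just mean counting tag pairs as well as single tags.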
---+ DETAIL
---++ This is a longstanding question
E.g. a 2007 thread discusses almost exactly this, with reference to a module that does not seem to be on CPAN (not now; unsure if it ever was).
There was one smart-ass reply:
[chromatic]: Filesystem directories already represent a hierarchy. I leave the implementation as a (trivial) exercise for the reader.
and some replies more on target:
[Adrian Howard] http://ift.tt/2crPBfG Currently there isn't any in built support for hierarchical test reporting in TAP. The topic's come up before, and a look at the list archives will probably be useful.
You can, of course, use some kind of structured text in the test
description so you have output that looks something like:
ok 1 - Test A/1/1 - fribble cleared
ok 2 - Test A/1/2 - bar set
ok 3 - Test A/1/3 - widget bobbling
ok 4 - Test A/2/1 - finkle barbed correctly
and then parse it out with TAPx::Parser and friends. I believe a guy
from Yahoo did something similar - can't remember if the code ever
escaped though...
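I never saw that Yahoo code either, but parsing Adrian's path-style descriptions does not take much machinery. A rough sketch of what such a rollup might look like (the "Test A/1/2" naming is from his example; everything else is my guesswork, not the Yahoo code):

#!/usr/bin/perl
# Sketch: roll up pass/fail counts at every level of a "Test A/1/2 - description" hierarchy.
use strict;
use warnings;

my ( %pass, %fail );
while (<STDIN>) {
    my ( $not, $path ) = /^(not )?ok \d+ - Test (\S+)/ or next;
    my @parts = split m{/}, $path;                 # "A/1/2" -> ("A", "1", "2")
    for my $depth ( 1 .. @parts ) {
        my $prefix = join '/', @parts[ 0 .. $depth - 1 ];
        $not ? $fail{$prefix}++ : $pass{$prefix}++;
    }
}

my %seen = ( %pass, %fail );                       # union of all prefixes seen
for my $prefix ( sort keys %seen ) {
    printf "%-10s %d passed, %d failed\n",
        $prefix, $pass{$prefix} // 0, $fail{$prefix} // 0;
}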
Furthermore, from the same thread:
[Ovid] http://ift.tt/2d4HvaL I don't recall seeing it on the CPAN, but as I recall, he was doing stuff like this:
ok 1 - [gui][database] fribble cleared
ok 2 - [database] - bar set
ok 3 - [gui] - widget bobbling
ok 4 - [gui][critical] - finkle barbed correctly
And from that he wrote a custom harness with TAPx::Parser which could do stuff like check that all of the [gui] tests passed, all of the [database] tests passed, etc. This allowed him to effectively tag and group sets of test results and potentially (my thought, though not mentioned in the discussion) take actions if tests with particular tags fail.
and
[Joe McMahon] http://ift.tt/2crRfho And that's Test::Description::Tagged which I really should oughta
release, along with its matching report generator.
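The "take actions if tests with particular tags fail" idea above is easy to sketch once the tags are being extracted; e.g. gating a build on [critical] failures (the tag convention and the exit policy are made up for illustration):

#!/usr/bin/perl
# Sketch: exit nonzero if any test tagged [critical] failed, so a build script can stop.
use strict;
use warnings;

my $critical_failures = 0;
while (<STDIN>) {
    $critical_failures++ if /^not ok\b/ && /\[critical\]/;
}
warn "$critical_failures critical test failure(s)\n" if $critical_failures;
exit( $critical_failures ? 1 : 0 );

A real harness would presumably do this from TAP::Parser rather than raw regexes, but the idea is the same.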
---++ It's not just about rollup
Even without rollup, it would be nice to just plain make the TAP output more readable, both in and out of context.
E.g. a 2013 post by Ovid, http://ift.tt/Z8zbXb, on bad test messages:
ok 1 - redirect
ok 2 - transparent gif
ok 3 - cookies ok
ok 4 - log data
Can you tell what that means? Me neither. In fact, this code was a bit confusing and in the process of understanding it, I wound up rewriting the test messages similar to this:
ok 1 - If no cookies are found, we should redirect the user
ok 2 - ... to a transparent gif
ok 3 - ... and the cookie should have a value of 0
ok 4 - ... and the log should have default values and be marked as not valid
Ovid goes on to say that he prefers subtests:
... if you rewrite that as a subtest, you get this:
ok 1 - If no cookies are found, we should redirect the user
ok 2 - to a transparent gif
ok 3 - and the cookie should have a value of 0
ok 4 - and the log should have default values and be marked as not valid
1..4
In other words, you can skip the grouping by ellipses (...) that I used to always do and group related tests in a subtest with a descriptive name of the test case.
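For reference, the grouping Ovid is describing is just Test::More's subtest. A sketch of what the test side might look like; his real assertions are not shown in the post, so pass() stands in for each check:

use strict;
use warnings;
use Test::More;

# Each pass() stands in for a real assertion from Ovid's example.
subtest 'No-cookie redirection' => sub {
    pass 'If no cookies are found, we should redirect the user';
    pass 'to a transparent gif';
    pass 'and the cookie should have a value of 0';
    pass 'and the log should have default values and be marked as not valid';
};

done_testing();

The subtest name then appears on the summary ok/not ok line, which is exactly the context that gets lost when you grep the flat output, as below.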
Me, I think I prefer the ellipses...
But then I am ALSO thinking in terms of summarizing the data.
But - whether using ellipses per Ovid, or leading-whitespace indentation as I was first doing - grepping for "not ok" or running egrep -v '^ok' gives you fragmentary messages out of context, like
% egrep -v '^ok' TEST_OUTPUT
not ok 3 - ... and the cookie should have a value of 0
not ok 33 - [gui] - widget bobbling
---++ Should be able to do better
A decent test system might separate short test names from longer test descriptions:
% test_filter -failures < TEST_OUTPUT
not ok 3 - "No cookies redirection"/"new cookie 0"
If no cookies are found, we should redirect the user
... and the cookie should have a value of 0
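test_filter is hypothetical, but the failures-with-context part is easy to sketch, assuming the ellipsis convention above (a description that does not start with "..." opens a group; "..." lines continue it):

#!/usr/bin/perl
# Sketch of a hypothetical "test_filter -failures": print each failing TAP line
# together with the group-header line that gives it context.
use strict;
use warnings;

my $group_header = '';
while (<STDIN>) {
    my ( $failed, $desc ) = /^(not )?ok \d+ - (.*)/ or next;
    $group_header = $desc unless $desc =~ /^\.\.\./;   # a new group starts here
    next unless $failed;
    print;                                             # the failing line itself
    print "    context: $group_header\n" if $desc =~ /^\.\.\./;
}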
I am used to systems that produce ... ahem, I might as well admit it ... XML output, so that subtrees of the output can be opened or closed.
I.e. rather than the apparently almost completely flat
ok 44 - testing: eval'ing: TEST-1
ok 45 - $eval_error should be false
ok 46 - $eval_caught_error should be undefined
ok 47 - expected value check
DEBUG: 1 args:("a", 1, "b", 2)
DEBUG: 4 args:("a", 1, "b", 2)
# no error
ok 48 - testing: eval'ing: TEST-2
ok 49 - $eval_error should be false
ok 50 - $eval_caught_error should be undefined
not ok 51 - expected value check
# Failed test ' expected value check'
# at ./Test_Key_Value.pl line 70.
# Compared $data->{"a"}
# got : '1'
# expect : '11'
DEBUG: 1 args:("a", 1, "b", 2, "a", 3)
DEBUG: 4 args:("a", 1, "b", 2, "a", 3)
# no error
I often add a few XML tags (actually pseudo-XML, because full-on XML can be quite ugly and hard to read, whereas it is pretty easy to just add a few tags by hand):
<test TEST-1>
ok 44 - testing: eval'ing: TEST-1
<subtests>
ok 45 - $eval_error should be false
ok 46 - $eval_caught_error should be undefined
ok 47 - expected value check
<diag>
DEBUG: 1 args:("a", 1, "b", 2)
DEBUG: 4 args:("a", 1, "b", 2)
# no error
</diag>
</subtests>
</test>
<test TEST-2>
<subtests>
ok 48 - testing: eval'ing: TEST-2
ok 49 - $eval_error should be false
ok 50 - $eval_caught_error should be undefined
not ok 51 - expected value check
<failure>
# Failed test ' expected value check'
# at ./Test_Key_Value.pl line 70.
# Compared $data->{"a"}
# got : '1'
# expect : '11'
</failure>
<diag>
DEBUG: 1 args:("a", 1, "b", 2, "a", 3)
DEBUG: 4 args:("a", 1, "b", 2, "a", 3)
# no error
</diag>
</subtests>
</test>
which is pretty easy to indent and, in this case, convert to an org-mode list:
+ <test TEST-1>
+ ok 44 - testing: eval'ing: TEST-1
+ <subtests>
+ ok 45 - $eval_error should be false
+ ok 46 - $eval_caught_error should be undefined
+ ok 47 - expected value check
+ <diag>
DEBUG: 1 args:("a", 1, "b", 2)
DEBUG: 4 args:("a", 1, "b", 2)
# no error
</diag>
</subtests>
</test>
+ <test TEST-2>
+ <subtests>
+ ok 48 - testing: eval'ing: TEST-2
+ ok 49 - $eval_error should be false
+ ok 50 - $eval_caught_error should be undefined
+ not ok 51 - expected value check
+ <failure>
+ # Failed test ' expected value check'
# at ./Test_Key_Value.pl line 70.
+ # Compared $data->{"a"}
# got : '1'
# expect : '11'
</failure>
+ <diag>
DEBUG: 1 args:("a", 1, "b", 2, "a", 3)
DEBUG: 4 args:("a", 1, "b", 2, "a", 3)
# no error
</diag>
</subtests>
</test>
from which you can zoom out and in
+ <test TEST-1> ...
+ <test TEST-2> ...
zooming in
+ <test TEST-1>
+ ok 44 - testing: eval'ing: TEST-1
+ <subtests> ...
</test>
+ <test TEST-2>
+ <subtests>
+ ok 48 - testing: eval'ing: TEST-2
+ ok 49 - $eval_error should be false
+ ok 50 - $eval_caught_error should be undefined
+ not ok 51 - expected value check
+ <failure>
+ # Failed test ' expected value check'
# at ./Test_Key_Value.pl line 70.
+ # Compared $data->{"a"}
# got : '1'
# expect : '11'
</failure>
+ <diag> ...
</subtests>
</test>
I used org-mode above. Or, you can just look at the XML in your favorite web browser.
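Adding those pseudo-XML tags can be partly automated too. A rough sketch that wraps each group in <test NAME> ... </test>, assuming my own convention that a line like "ok 44 - testing: eval'ing: TEST-1" starts a new group (the finer <subtests>/<diag>/<failure> tags are left to hand editing or a couple more patterns):

#!/usr/bin/perl
# Sketch: wrap each "testing: ..." group of TAP lines in <test NAME> ... </test>
# pseudo-XML, so the result can be folded in org-mode or viewed in a browser.
use strict;
use warnings;

my $open = 0;
while (<STDIN>) {
    if ( my ($name) = /^(?:not )?ok \d+ - testing: .*?(\S+)\s*$/ ) {
        print "</test>\n" if $open;
        print "<test $name>\n";
        $open = 1;
    }
    print $open ? "  $_" : $_;                     # indent lines inside a group
}
print "</test>\n" if $open;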
---+ CONCLUSION
Since Perl has "tons of fabulous test packages", I would have expected more sophistication here. Stuff like:
- test names vs test descriptions
- rollup of test pass/fail across several different axes
- just plain more readable TAP output
- drilling in / out
Of course, it is always possible, as with so much of CPAN, that there are good tools out there, just hard to find amidst all the ...