I am trying to create a set of automated tests for an application that is mostly non-UI/non-web. The tests basically start the actual application, feed data through its interfaces, and inspect/verify its output/responses. Ideally, such tests can be run from the build server as well as by any developer on their own machine.
Although this is obviously not a unit test, one frequently suggested approach is to author such tests like unit tests in C# (i.e. create a test project with a number of [TestMethod]s, etc.). This generally works fine and I can run the tests and see their results (success/fail). With MSTest, a summary is also available in a .trx file, which is handy for further processing the test results, e.g. to stash them away in a documentation system.
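To make the setup concrete, here is a minimal sketch of such a driver test in MSTest style. Everything specific is hypothetical: the application name (MyApp.exe), its arguments, the use of stdin/stdout as the interface, and the expected response are all placeholders for whatever the real application provides.

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SystemTests
{
    [TestMethod]
    public void EchoRequest_ProducesExpectedResponse()
    {
        // Start the actual application under test (path/args are placeholders).
        var psi = new ProcessStartInfo("MyApp.exe", "--mode test")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using var app = Process.Start(psi);

        // Feed data through one of the application's interfaces (here: stdin).
        app.StandardInput.WriteLine("PING");
        app.StandardInput.Close();

        // Inspect/verify the response.
        string response = app.StandardOutput.ReadToEnd();
        app.WaitForExit();
        Assert.AreEqual("PONG", response.Trim());
    }
}
```

Run from a test project, this produces the usual success/fail result and, with MSTest, the .trx summary mentioned above.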
However, it would be highly desirable to also collect additional files produced while each test runs - for example the application's log files, output files it produces, or even performance monitor log files if such aspects are monitored during the test.
I could potentially write individual code at the end of each test method to gather such evidence, but that would not be very nice, since all/most tests would want to gather the same or similar files. To write generic code instead (either a utility function or test cleanup code), I would at least need to know which test method is currently running, which does not seem to be available (other than scanning through call stacks, etc.).
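The kind of generic collector I have in mind looks roughly like the sketch below. The class name, folder layout, and file patterns are all placeholders, and the open question from above is dodged by simply passing the current test name in as a parameter:

```csharp
using System;
using System.IO;

public static class EvidenceCollector
{
    // Copies files matching the given patterns from the application's working
    // directory into a per-test evidence folder, so every test can invoke the
    // same one-liner from its cleanup code instead of duplicating this logic.
    public static string Collect(string testName, string sourceDir,
                                 string evidenceRoot, params string[] patterns)
    {
        string target = Path.Combine(evidenceRoot, testName);
        Directory.CreateDirectory(target);
        foreach (string pattern in patterns)
        {
            foreach (string file in Directory.GetFiles(sourceDir, pattern))
            {
                File.Copy(file, Path.Combine(target, Path.GetFileName(file)),
                          overwrite: true);
            }
        }
        return target;
    }
}
```

A cleanup hook would then call something like `EvidenceCollector.Collect(currentTestName, appWorkDir, @"C:\evidence", "*.log", "*.out")` - provided there is a clean way to obtain `currentTestName`.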
Does anyone have any ideas how to approach this? It does not seem such an uncommon requirement that running tests should result in more than just success/fail. I am not bound to Visual Studio's built-in test runner or to (ab)using the unit test framework for system/integration tests - although it would be nice if the solution integrated with Visual Studio somehow.