I am currently putting together a relatively lightweight framework for specifying expected code coverage and guaranteeing that that coverage is met (or is explicitly stated as not needed).
However, testing the framework itself is slightly tricky in the scenarios where its automatically implemented tests are supposed to fail, because normally you want passing tests to be a good thing and failing tests to be a bad thing.
An example is:
public interface IDesigned
{
    void DoFoo();
}
[ClassTestRequirements(Targets.Methods, "IsNotImplemented")]
public interface ITestsOfDesigned : IDesigned
{
}
public class Foo : IDesigned
{
    // Deliberately not yet implemented; the framework should demand a test asserting exactly that.
    public void DoFoo() => throw new NotImplementedException();
}
[TestClass]
[AdoptsTestRequirements(typeof(ITestsOfDesigned), typeof(IDesigned))]
public class Tester : ProvidesCoverageFor<Foo>
{
    /*
     * Validation that Tester does what it's supposed to (it currently
     * doesn't) is set up in the base class.
     */
}
ProvidesCoverageFor<> is an abstract base class that includes a test method which, using the attributes above, validates that Tester contains a method decorated with [Covers(nameof(IDesigned.DoFoo), "IsNotImplemented")]. What that method actually does is not checked. There's also a mechanism whereby a "test tester" can be set up to check that all classes implementing IProvidesCoverage<> actually validate their tests.
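To make that concrete, once the coverage method exists, Tester would look something like this (a sketch only: the method name and body here are mine, the framework only cares that the [Covers] attribute is present):

[TestClass]
[AdoptsTestRequirements(typeof(ITestsOfDesigned), typeof(IDesigned))]
public class Tester : ProvidesCoverageFor<Foo>
{
    // Hypothetical coverage method: the framework only checks that a method
    // carrying this [Covers] attribute exists; what the body asserts is up to me.
    [TestMethod]
    [Covers(nameof(IDesigned.DoFoo), "IsNotImplemented")]
    public void DoFoo_IsNotImplemented()
    {
        Assert.ThrowsException<NotImplementedException>(() => new Foo().DoFoo());
    }
}

(That is the completed version, shown only for contrast with the deliberately empty Tester above.)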
Obviously, in the example as it stands (with no [Covers] method), ProvidesCoverageFor<>.ValidateTests() should fail, and that feels like a perfectly valid thing to want to test. Ideally, though, I'd like to configure a scenario where those tests aren't treated as actual tests (despite being decorated with [TestClass]/[TestMethod] etc.), and instead an external (actual) test runs them and expects a failure.
How can I get that? And if I can't, how should I be testing a project like this, given that, since the intention is to cause certain tests to fail, failure IS a legitimate pass sometimes.
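For concreteness, the "external test expects a failure" half might look something like the sketch below; it assumes ValidateTests() can be called directly without any runner plumbing and signals failure by throwing MSTest's AssertFailedException, and TesterTester is just a hypothetical name. The part I can't see how to do is stopping the runner from also counting Tester's own failure against the run.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TesterTester
{
    [TestMethod]
    public void Tester_ValidationFails_WhileNoCoversMethodExists()
    {
        // The deliberately empty Tester above has no [Covers] method yet,
        // so its validation failing is the expected (passing) outcome here.
        var inner = new Tester();
        Assert.ThrowsException<AssertFailedException>(() => inner.ValidateTests());
    }
}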