When doing software development, you also want to verify the robustness of your code. Especially in image processing (and I'm fairly sure this applies to other fields too, such as bioscience simulators), your input data can vary a lot.
More than once I've faced the situation that a rolled-out piece of software crashes and causes some irritation at the customer's site. The framework holding the image processing algorithms is pretty stable; crashes usually occur in the algorithms themselves.
Imagine you use a third-party, closed-source image processing library. To track down any problematic code you go manually through the code you wrote, and everything around the blackboxed function seems pretty robust.
Unfortunately, as soon as an image comes in with this very special gradient in this very particular region, the blackboxed function crashes.
Wrapping all third-party calls in try/catch blocks does not take out all the risk either: especially on embedded devices you may simply get a segfault that no exception handler can catch.
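One way to survive such crashes in a test harness is to run the blackboxed call in a separate process. Below is a minimal sketch of that idea in Python; `third_party.process` is a hypothetical binding for the closed-source function, and the timeout value is just a placeholder.

```python
import multiprocessing as mp

def _call_blackbox(image, result_queue):
    import third_party  # hypothetical binding for the closed-source library
    result_queue.put(third_party.process(image))

def run_isolated(image, timeout=30):
    """Run the blackboxed call in a child process so that a segfault
    only kills the child, never the test harness itself."""
    queue = mp.Queue()
    proc = mp.Process(target=_call_blackbox, args=(image, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():              # the call hangs: also a failure mode
        proc.terminate()
        proc.join()
        return ("timeout", None)
    if proc.exitcode < 0:            # killed by a signal, e.g. -11 for SIGSEGV
        return ("crash", proc.exitcode)
    if proc.exitcode != 0:           # ordinary exception inside the child
        return ("error", proc.exitcode)
    return ("ok", queue.get() if not queue.empty() else None)
```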
To avoid unhappy customers and therefore weed out possible crashes, I started doing white-noise tests, using randomly generated patterns as input images and letting these tests run for a few days. This actually gave me some confidence (and in some cases, more mistrust) in the robustness of a closed-source function.
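A minimal sketch of such a white-noise loop, assuming the `run_isolated` helper above; the image shape, 8-bit depth, and iteration count are placeholders, and seeding each image makes any failure reproducible.

```python
import numpy as np

def white_noise_test(iterations=100_000, shape=(512, 512)):
    """Feed uniformly random 8-bit images into the blackbox and record
    every seed that crashes, errors out, or hangs."""
    failures = []
    for seed in range(iterations):
        rng = np.random.default_rng(seed)   # seed makes each case reproducible
        image = rng.integers(0, 256, size=shape, dtype=np.uint8)
        status, detail = run_isolated(image)
        if status != "ok":
            failures.append((seed, status, detail))
    return failures
```

Pure uniform noise rarely resembles real-world inputs, so in practice it may help to mix in random gradients, blurred noise, or mutated copies of real captures rather than relying on white noise alone.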
Compared to an analytical approach (or one using integration/unit tests) it seems like ... steamroller tactics. It is just not very elegant.
Coming to my question: Is this empirical testing approach appropriate? Are there better solutions available?