I have a set of tests for a large computer application that includes: (a) about 50,000 test cases, of which approximately 99.9% pass, and (b) about 150 input variables per test, of which about 85% are real numbers and the other 15% are enumerations drawn from 2 to 20 options, with the option set depending on the variable. Is there an algorithm that can make short work of finding a minimum set of variables and values associated with the 0.1% of non-passing tests? By short work, I mean 10 minutes or less on a typical desktop or laptop Windows computer, which I would not expect brute-force algorithms to achieve with such a large number of inputs. Would this be a good application for off-the-shelf software that derives decision trees, or would something else be more appropriate?
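To make the question concrete, here is a minimal sketch (pure Python, synthetic data, all variable names hypothetical) of one non-brute-force idea along these lines: score each (variable, value) pair by how much more often it appears among failing tests than among passing ones, a crude form of contrast-set mining. A decision-tree learner with class weighting to handle the 0.1% imbalance would be the off-the-shelf analogue; real-valued inputs would additionally need binning or tree-style threshold splits, which this sketch omits.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical synthetic data: 5,000 tests, 10 enumerated inputs.
# Failures occur only when var3 == "B" and var7 == "X" -- the "minimum
# set" we want to recover; everything else is noise.
N_TESTS = 5000
N_VARS = 10
OPTIONS = ["A", "B", "C", "X", "Y"]

tests = []
for _ in range(N_TESTS):
    row = {f"var{i}": random.choice(OPTIONS) for i in range(N_VARS)}
    row["passed"] = not (row["var3"] == "B" and row["var7"] == "X")
    tests.append(row)

failing = [t for t in tests if not t["passed"]]
passing = [t for t in tests if t["passed"]]

# Score each (variable, value) pair by how much more frequently it
# occurs in failing runs than in passing runs.
scores = {}
for i in range(N_VARS):
    var = f"var{i}"
    fail_counts = Counter(t[var] for t in failing)
    pass_counts = Counter(t[var] for t in passing)
    for value in OPTIONS:
        f_rate = fail_counts[value] / max(len(failing), 1)
        p_rate = pass_counts[value] / max(len(passing), 1)
        scores[(var, value)] = f_rate - p_rate

# The top-scoring pairs should be exactly the injected failure condition.
top = sorted(scores, key=scores.get, reverse=True)[:2]
print(top)
```

One pass over 50,000 rows of 150 variables is only a few million counter updates, so this kind of scan (and similarly a single decision-tree fit) should land comfortably under the 10-minute budget on an ordinary machine.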