The recent Neural Network (NN) revolution comes, in large part, from hardware efficiency. The theory behind most current NN artifacts (even the relatively recent Capsule Networks) has been around for a long time.
If I am not mistaken, many small parallel compute units (a GPU) are able to tackle problems in a way that a single powerful central compute unit (a CPU) cannot.
I am curious whether there is any area of theorem proving, software testing, or model checking (or, more generally, any avenue of formal verification and validation) where that sort of distributed small computation can behave better than one large central computation.
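As one hypothetical illustration of the kind of verification workload that fits this pattern (my own sketch, not something claimed in the question): brute-force SAT checking is embarrassingly parallel, since every candidate assignment can be evaluated independently. On a GPU, each thread could test one assignment; the sequential loop below stands in for that per-thread work. The formula and the `clause_eval` helper are made up for the example.

```python
from itertools import product

# Toy formula (invented for illustration): (x0 OR NOT x1) AND (x1 OR x2)
def clause_eval(x0, x1, x2):
    return (x0 or not x1) and (x1 or x2)

# Enumerate all 2^3 assignments. Each evaluation is independent work with
# no shared state -- exactly the shape a GPU's many small cores exploit.
assignments = list(product([False, True], repeat=3))
satisfying = [a for a in assignments if clause_eval(*a)]
print(len(satisfying))  # -> 4 satisfying assignments
```

Of course, real verification problems are not pure brute force, but pieces of them (state-space exploration in model checking, portfolio solving, test-case fuzzing) have a similar independent-work structure.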
Specifically, if we are ever to address the verification and validation of large neural networks, does it "light any bulbs" to wonder where a GPU could come in handy and a CPU could not?