Wednesday, January 31, 2018

Neural network accuracy plateaus (doesn't go up or down)

I have created a neural network using TensorFlow to predict college admission decisions. A snippet of the dataset is below (I have about 500 samples). The GPA/SAT/ACT scores have been normalized, and the remaining columns have been one-hot encoded.

[Snippet of the dataset: every column is either one-hot encoded or normalized]
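For reference, here is a minimal sketch of the kind of preprocessing described above. The column names ("GPA", "SAT", "ACT", "Admitted") and the file name are hypothetical, since the actual dataset is not shown in the post.

```python
# Sketch of the preprocessing described in the post: normalize the score
# columns, one-hot encode everything else. Column/file names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("admissions.csv")  # hypothetical file name

# Normalize the continuous score columns.
numeric_cols = ["GPA", "SAT", "ACT"]
df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])

# One-hot encode the remaining categorical columns (everything except the label).
categorical_cols = [c for c in df.columns if c not in numeric_cols + ["Admitted"]]
df = pd.get_dummies(df, columns=categorical_cols)

X = df.drop(columns=["Admitted"]).values.astype("float32")
y = df["Admitted"].values.astype("float32")
```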

But for some reason, whenever I train the model, the accuracy rate hits a ceiling and just stays at a fixed number.

[Figure: visualization of the accuracy rate over training]

I have tried different numbers of epochs, optimizers, sample sizes, learning rates, regularization strengths, and dropout rates, but nothing seems to improve the model. I'm not sure whether this is a bias problem, a variance problem, or something else. Please help me understand why my accuracy is stuck and isn't moving.
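To make the setup concrete, below is a minimal sketch of a binary classifier like the one described, continuing from the preprocessing sketch above (it uses X and y from there). It only illustrates where the knobs mentioned above (optimizer, learning rate, L2 regularization, dropout, epochs) plug in; the layer sizes and hyperparameter values are assumptions, not the actual model from the post.

```python
# Hypothetical model sketch: shows where optimizer, learning rate, L2
# regularization, dropout, and epochs are configured. X and y come from the
# preprocessing sketch above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4),
                          input_shape=(X.shape[1],)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(16, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# validation_split lets training and validation accuracy be compared per epoch,
# which is where a plateau like the one described would show up.
history = model.fit(X, y, validation_split=0.2, epochs=100, batch_size=32)
```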
