Wednesday, July 5, 2017

Different batch sizes give different results with inception_v2 during testing

I use inception_v2 as the base network for a classification task. During training, the batch size is 128. During testing, if the batch size is also 128, everything is fine. However, if the batch size is smaller than 128, the results change, and precision declines as the batch size drops. With a batch size of 1, the network fails entirely. I also tried inception_v1 and inception_v3, and the same problem appeared. However, if the base network is replaced with AlexNet (TensorFlow) or with VGG (slim), everything works correctly. I suspect the bug is related to inception_v1~v3 or to batch normalization; maybe I have not used inception_v2 properly. Has anyone encountered a similar problem?
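For what it's worth, the symptoms (accuracy that depends on the test batch size, and complete failure at batch size 1) look consistent with batch normalization computing statistics from the current test batch instead of using the moving averages accumulated during training. Below is a minimal sketch of what I understand the intended evaluation setup to be with the TF-Slim model library; the "nets" import path and the num_classes value are assumptions based on the slim ImageNet setup, not something confirmed for my exact code.

import tensorflow as tf
from nets import inception  # from tensorflow/models/research/slim (assumed path)

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])

# Build the evaluation graph with is_training=False so that batch norm
# normalizes with the stored moving mean/variance rather than with the
# statistics of the current test batch.
with slim.arg_scope(inception.inception_v2_arg_scope()):
    logits, _ = inception.inception_v2(images,
                                       num_classes=1001,  # assumed: slim ImageNet convention
                                       is_training=False)
predictions = tf.argmax(logits, axis=1)

A related point on the training side: as far as I know, the batch-norm moving averages are only updated if the ops in the UPDATE_OPS collection actually run, so the train op is usually built like this (optimizer choice here is just illustrative):

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

If the update ops never run, or if the moving_mean/moving_variance variables are not restored from the checkpoint at evaluation time, the network would still work at the training batch size but degrade as the test batch shrinks, which matches what I am seeing.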
