Saturday, March 23, 2019

What should we do with batch normalization layers when using transfer learning in test mode?

Recently, I ran some tests on the behavior of batch normalization layers. I would like to know: during transfer learning, if I freeze the bottom layers, what should I do with the batch normalization layers? I actually ran some tests.

I have two cases:

(1) make the layer non-trainable

(2) keep the layer trainable, but call it with training=False

I see a small difference between these two. Here's the test code:

(1) make the layer non-trainable

    # freeze every layer in the model, including batch normalization
    for layer in model.layers:
        layer.trainable = False

I found this actually freezes everything during training: beta, gamma, the moving mean, and the moving variance never get updated.
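Here's a minimal sketch of how this can be checked end to end, assuming TensorFlow 2.x Keras (the toy Dense/BN model, shapes, and random data are just placeholders; the top Dense is left trainable so fit() has something to optimize):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(4,))
    x = layers.Dense(8)(inputs)
    bn = layers.BatchNormalization()
    x = bn(x)
    outputs = layers.Dense(1)(x)
    model = keras.Model(inputs, outputs)

    # Case (1): freeze the bottom layers, including batch normalization.
    for layer in model.layers[:-1]:
        layer.trainable = False

    model.compile(optimizer="sgd", loss="mse")

    # Snapshot gamma, beta, moving_mean and moving_variance before training.
    before = [w.numpy().copy() for w in bn.weights]

    model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
              epochs=1, verbose=0)

    for w, b in zip(bn.weights, before):
        print(w.name, "unchanged:", np.allclose(w.numpy(), b))
    # Expected (in TF 2.x): all four batch norm weights report unchanged.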

(2) keep the layer trainable, but call it with training=False

    o = BatchNormalization()(o, training=False)

I found that beta and gamma keep changing, while the moving mean and moving variance are frozen.
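A matching sketch for case (2), again assuming TensorFlow 2.x Keras (same toy model; the batch norm layer stays trainable but is called with training=False):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(4,))
    x = layers.Dense(8)(inputs)
    bn = layers.BatchNormalization()
    x = bn(x, training=False)  # always use, and never update, the moving stats
    outputs = layers.Dense(1)(x)
    model = keras.Model(inputs, outputs)

    model.compile(optimizer="sgd", loss="mse")

    before = {w.name: w.numpy().copy() for w in bn.weights}
    model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
              epochs=1, verbose=0)

    for w in bn.weights:
        print(w.name, "changed:", not np.allclose(before[w.name], w.numpy()))
    # Expected: gamma and beta change (they still receive gradients),
    # while moving_mean and moving_variance stay frozen.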

OK, so here is my actual question: in a transfer learning setup,

I want to freeze the bottom layers and fine-tune only the top layers.

Which one should I do, (1) or (2)? A sketch of the setup I mean follows.
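For concreteness, a sketch of the kind of setup in question, assuming TensorFlow 2.x Keras (MobileNetV2 and the 10-class head are just placeholders):

    from tensorflow import keras
    from tensorflow.keras import layers

    base = keras.applications.MobileNetV2(
        include_top=False, input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # option (1): freeze the whole bottom

    inputs = keras.Input(shape=(224, 224, 3))
    # for option (2), keep base trainable and call base(inputs, training=False)
    x = base(inputs)
    outputs = layers.Dense(10, activation="softmax")(x)  # trainable top
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")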

Thanks in advance!
