Wednesday, November 1, 2017

TensorFlow batch normalization not working during the testing phase

I'm training a GAN, and while it seems to be doing a good job during the training phase, I'd like to evaluate my discriminator's accuracy at test time, and I'm having trouble getting tf.layers.batch_normalization to work in test mode. :(

This is what I've done:

I have an optimizer function:

def optimizer(D_Loss, G_Loss, D_learning_rate, G_learning_rate, extra_update_ops):
    # D and G optimizers; the control dependency forces the batch-norm
    # moving-average updates (UPDATE_OPS) to run on every training step.
    with tf.control_dependencies(extra_update_ops):
        players_vars = tf.trainable_variables()
        D_vars = [var for var in players_vars if var.name.startswith('Discriminator')]
        G_vars = [var for var in players_vars if var.name.startswith('Generator')]

        D_optimizer = tf.train.AdamOptimizer(D_learning_rate).minimize(D_Loss, var_list=D_vars)
        G_optimizer = tf.train.AdamOptimizer(G_learning_rate).minimize(G_Loss, var_list=G_vars)
    return D_optimizer, G_optimizer
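For context on what the two modes of batch normalization do: with training=True, each batch is normalized with its own statistics while moving averages are merely accumulated for later; with training=False, the accumulated population statistics are used instead. If the graph is still run in training mode at test time, a skewed or low-variance test batch gets normalized by its own (wrong) statistics. A minimal NumPy sketch of that mechanism (my own illustration, not TensorFlow code; no gamma/beta for simplicity):

```python
import numpy as np

class BatchNorm:
    """Minimal batch-norm sketch illustrating training vs. inference mode."""
    def __init__(self, momentum=0.99, eps=1e-3):
        self.momentum, self.eps = momentum, eps
        self.moving_mean, self.moving_var = 0.0, 1.0  # typical initial values

    def __call__(self, x, training):
        if training:
            # Training mode: normalize with this batch's own statistics
            # and update the moving averages (what UPDATE_OPS does).
            mean, var = x.mean(), x.var()
            self.moving_mean = self.momentum * self.moving_mean + (1 - self.momentum) * mean
            self.moving_var = self.momentum * self.moving_var + (1 - self.momentum) * var
        else:
            # Inference mode: normalize with the accumulated population stats.
            mean, var = self.moving_mean, self.moving_var
        return (x - mean) / np.sqrt(var + self.eps)

bn = BatchNorm()
rng = np.random.default_rng(0)
for _ in range(1000):                                   # "training" on N(5, 2) data
    bn(rng.normal(5.0, 2.0, size=64), training=True)

test_batch = np.full(8, 9.0)                            # degenerate test batch
wrong = bn(test_batch, training=True)   # batch stats: everything maps to ~0
right = bn(test_batch, training=False)  # population stats: ~(9 - 5) / 2 = ~2
```

Running a constant batch in training mode normalizes it to all zeros (its own mean is subtracted), whereas inference mode correctly places it about two population standard deviations above the mean.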

where D_L and G_L are discriminator_loss and generator_loss, respectively.

Then in my train function, i.e. train_GAN(), I run:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
D_optimizer, G_optimizer = optimizer(D_L, G_L, D_learning_rate, G_learning_rate, extra_update_ops)
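The ops collected under tf.GraphKeys.UPDATE_OPS are the assignments that keep batch norm's moving_mean and moving_variance current; wrapping the optimizer in tf.control_dependencies(extra_update_ops) forces them to run on every training step. A NumPy sketch of the exponential-moving-average update those ops perform (assuming the tf.layers.batch_normalization default momentum of 0.99):

```python
import numpy as np

# Per-batch means observed during training; the true population mean is ~5.
rng = np.random.default_rng(1)
batch_means = rng.normal(5.0, 0.2, size=2000)

momentum = 0.99      # tf.layers.batch_normalization default
moving_mean = 0.0    # initial value of the moving_mean variable

for m in batch_means:
    # The assignment performed by each UPDATE_OPS op on each training step.
    moving_mean = momentum * moving_mean + (1.0 - momentum) * m

# moving_mean converges toward the population mean (~5). If the update ops
# never run, it stays at its 0.0 initializer, and inference-mode
# normalization shifts every activation by the wrong amount.
```

This is why forgetting the control dependency (or never running the update ops) makes test-mode batch norm normalize with stale initial statistics even when training accuracy looks fine.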

Now, all I get at test time is discriminator_accuracy = 0.113500, while train_accuracy rises steadily to above 0.98, and the generated samples look valid too!

Does anyone know what can be the problem? Thanks!
