Saturday, January 25, 2020

Testing an Encoder-Decoder Model in TensorFlow

I've trained an encoder-decoder model in TensorFlow. In the last batches of training, the network's predictions were close to the target sentences. I saved the model with tf.train.Saver() and restored it in the test method. But when I run the test method, the network's predictions are far from the target sentences, even though the network gets the same input sentences as during training.
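
For context, the saving step during training isn't shown in the post; here is a minimal sketch of what it presumably looked like in TF 1.x graph mode, reusing the checkpoint prefix './model_encDecAttention' from the test method below (the graph construction and training loop are elided):

import tensorflow as tf

# Build the encoder-decoder graph here, giving the placeholders the names
# referenced later ("encoder_inputs", "sequence_length", "predictions").
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop over the ~1M sentences ...
    # Writes model_encDecAttention.meta, .index and .data-* files.
    saver.save(sess, './model_encDecAttention')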

Training details:
- epochs: 1
- batch_size: 32
- hidden_units: 128
- training set: about 1M sentences
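
The get_batches helper that the test method below relies on isn't shown either; a hypothetical sketch, assuming it yields padded (inputs, targets) pairs of size batch_size and that id 0 is the <PAD> token:

import numpy as np

def get_batches(sources, targets, batch_size):
    # Yield (inputs, targets) batches, padding each batch to its longest sentence.
    for start in range(0, len(sources) - batch_size + 1, batch_size):
        source_batch = sources[start:start + batch_size]
        target_batch = targets[start:start + batch_size]
        max_src = max(len(s) for s in source_batch)
        max_tgt = max(len(t) for t in target_batch)
        # Pad with id 0, assumed to be <PAD>.
        yield (np.array([s + [0] * (max_src - len(s)) for s in source_batch]),
               np.array([t + [0] * (max_tgt - len(t)) for t in target_batch]))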

Here is the code of the test method:

import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # Restore the graph structure and the trained weights from the checkpoint.
    loader = tf.train.import_meta_graph("model_encDecAttention.meta")
    loader.restore(sess, './model_encDecAttention')
    graph = tf.get_default_graph()

    # Look up the input and output tensors by the names given at build time.
    input_sentence = graph.get_tensor_by_name("encoder_inputs:0")
    logits = graph.get_tensor_by_name("predictions:0")
    sequence_length = graph.get_tensor_by_name("sequence_length:0")

    for batch, (inputs, targets) in enumerate(get_batches(xTest, yTest, batch_size)):
        print(inputs)
        print(np.shape(inputs))
        print(targets)
        # Run the decoder on the test batch; keep the first sentence of the batch.
        prediction = sess.run(logits, {input_sentence: inputs,
                                       sequence_length: [len(inputs[0])] * batch_size})[0]

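To judge how far the predictions are from the targets, it helps to map the ids back to words; a self-contained, hypothetical sketch (int_to_vocab and the sample ids are made up, assuming prediction holds one vocabulary id per timestep):

import numpy as np

# Hypothetical id-to-word mapping; the real vocabulary is not in the post.
int_to_vocab = {0: '<PAD>', 1: 'i', 2: 'saved', 3: 'the', 4: 'model'}

# Stand-in for one `prediction` row returned by sess.run above.
prediction = np.array([1, 2, 3, 4, 0, 0])

# Drop padding and join the remaining tokens into a sentence.
words = [int_to_vocab[i] for i in prediction if int_to_vocab[i] != '<PAD>']
print(' '.join(words))  # -> "i saved the model"
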
I don't know if there is a mistake somewhere or if I just didn't train for long enough. Thank you for helping!
