Wednesday, January 6, 2021

Multi-Layer Perceptron testing phase in Deep Learning

I have a problem executing the test function in my multi-layer perceptron with auto-regression.

I have the following error:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-107-fbd55f77ab7c> in <module>()
    ----> 1 test()

    <ipython-input-106-c56ae8527786> in test()
         12       #_, predicted = outputs.max(dim=1) # (4, 10) #why written this way? because we're not interested in the max
         13       total += labels.size(0)
    ---> 14       correct += predicted.eq(labels).sum().item()
         15 
         16     print('Accuracy of the model on the testing images: %d %%' % (100*correct/total))

    RuntimeError: The size of tensor a (128) must match the size of tensor b (120) at non-singleton dimension 0

My test function goes like this:

    def test():
      with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
          outputs = net(images)
          # print(outputs.shape)
          # print(images.shape)
          # print(labels.shape)
          # loss = criterion(outputs, targets)
          _, predicted = torch.max(outputs.data, 1)  # index of the highest score per sample

          total += labels.size(0)
          correct += predicted.eq(labels).sum().item()

      print('Accuracy of the model on the testing images: %d %%' % (100 * correct / total))

I'm not sure how to correct this. Does my question make sense?
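
To narrow it down, here is a minimal diagnostic sketch I can run. It only reuses the net and test_loader from my code, and it assumes the network's output is a 2-D (batch, classes) tensor:

    import torch

    # Print the shape of every tensor involved, one line per batch,
    # to see where the sizes 128 and 120 come from.
    with torch.no_grad():
      for images, labels in test_loader:
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        print(images.shape, labels.shape, outputs.shape, predicted.shape)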


Background on the data:

I have an .npz file whose dataset array has shape (1202, 4768, 50) and whose gz array has shape (1202, 4768).
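
For completeness, the loading step looks roughly like this; the file name is a placeholder, and I'm assuming the npz keys are literally dataset and gz, matching the shapes above:

    import numpy as np
    import torch

    # 'mydata.npz' is a placeholder file name; the key names are assumptions.
    data = np.load('mydata.npz')
    X = torch.from_numpy(data['dataset']).float()  # shape (1202, 4768, 50)
    y = torch.from_numpy(data['gz']).long()        # shape (1202, 4768)

X_test and y_test are then taken from these tensors (the exact split isn't shown here).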

My test_loader has a batch_size of 128, as set up in this code:

    import torch.utils.data as data_utils

    testset = data_utils.TensorDataset(X_test, y_test)
    test_loader = data_utils.DataLoader(testset, batch_size=128, shuffle=True, num_workers=4)
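
One detail that might matter here: DataLoader's drop_last argument defaults to False, so the final batch can be smaller than 128, which could be where the 120 in the error message comes from. Printing the batch sizes makes that visible:

    # With drop_last=False (the default), the last batch may hold
    # fewer than 128 samples.
    for images, labels in test_loader:
      print(labels.size(0))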
