Thursday, April 8, 2021

The model gives me bad accuracy

I am working on a time series classification project. For this, I developed a CNN+LSTM model (the CNN extracts features and the LSTM captures the temporal dependence in my data). I am using a sliding window approach: each window is split into 5 subsequences that the CNN processes through a TimeDistributed layer, and the CNN output is fed into an LSTM (many-to-one architecture) to predict the label corresponding to the window. The shape of my training data is (119991, 5, 30, 2) and the shape of my test data is (59991, 5, 30, 2).

After training the model I get a training accuracy of 95%, but the test accuracy is only 58%. I know this is an overfitting problem, and I tried to resolve it with the dropout and early stopping approaches; I also changed the activation function of some layers, but I didn't get any improvement. Can you tell me how I can resolve this problem?
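For context, here is roughly how I build the (samples, 5, 30, 2) windows. This is a minimal sketch: the step size and the choice of label per window are illustrative placeholders, not necessarily my exact values.

import numpy as np

def make_windows(series, labels, window=150, step=30, n_subseq=5):
    """Slide a window over the series and split each window into
    n_subseq subsequences of equal length for the TimeDistributed CNN.
    series has shape (timesteps, 2); labels has one class id per timestep."""
    X, y = [], []
    for start in range(0, len(series) - window + 1, step):
        win = series[start:start + window]                        # (150, 2)
        X.append(win.reshape(n_subseq, window // n_subseq, -1))   # (5, 30, 2)
        y.append(labels[start + window - 1])                      # label of the window (illustrative choice)
    return np.array(X), np.array(y)

# example with dummy data: 10000 timesteps, 2 features, 6 classes
series = np.random.rand(10000, 2)
labels = np.random.randint(0, 6, size=10000)
x_example, y_example = make_windows(series, labels)
print(x_example.shape)   # (samples, 5, 30, 2)
# y_example would still need one-hot encoding (e.g. to_categorical) for categorical_crossentropy

And this is my model code: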

from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import TimeDistributed, Conv1D, MaxPooling1D, Flatten, LSTM, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

# CNN feature extractor applied to each of the 5 subsequences via TimeDistributed
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=80, kernel_size=3, activation='relu', strides=1), input_shape=(None, 30, 2)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=2, activation='elu')))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))

# LSTM over the sequence of CNN feature vectors (many-to-one), then classifier head
model.add(LSTM(80))
model.add(Dense(20, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(6, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.02), metrics=['accuracy'])

# keep the weights with the best validation accuracy
mc = ModelCheckpoint('best_model.h5', monitor='val_accuracy', mode='max', save_best_only=True)
model.fit(x_train, y_train, validation_split=0.1, epochs=100, verbose=1, batch_size=10, callbacks=[mc], shuffle=True)

# evaluate the best checkpoint on the test set
saved_model = load_model('best_model.h5')
_, accuracy = saved_model.evaluate(x_test, y_test)
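The early stopping I mentioned trying was wired in roughly like this (the patience value here is illustrative, not necessarily the one I used):

from tensorflow.keras.callbacks import EarlyStopping

# stop training when validation accuracy stops improving, keeping the best weights
es = EarlyStopping(monitor='val_accuracy', mode='max', patience=10, restore_best_weights=True)
model.fit(x_train, y_train, validation_split=0.1, epochs=100, verbose=1,
          batch_size=10, callbacks=[mc, es], shuffle=True)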
