Artificial Neural Networks and Deep Learning 2021 - Homework 2 Forum


> VAL_LOSS doesn't go down!!!

The issue I'm facing is pretty strange: my best (overfitted) model scored quite well on CodaLab (4.21 for a single model), but the validation loss does not decrease even when I change the train/val split.

I'll describe my DataLoader and my models:
- I'm using the WindowGenerator class from the TensorFlow time-series tutorial (https://www.tensorflow.org/tutorials/structured_data/time_series).
- The simplest model I've trained is: Input((96, 7)) -> Conv1D(256, 3) -> BatchNormalization -> ReLU -> Dropout(0.2) -> LSTM(512) -> Dropout(0.4) -> Dense(48 * 7) -> Reshape([48, 7]) (see the sketch after this list).
- I know this is far from the best architecture, but I would at least expect the train and validation losses to go down together... and this does not happen.
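For reference, here is a minimal Keras sketch of that architecture. The optimizer, loss, and Conv1D padding are my assumptions, not something stated above:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal sketch of the model described above. Assumptions: default
# "valid" Conv1D padding, Adam + MSE, LSTM returning only its final
# hidden state.
model = keras.Sequential([
    keras.Input(shape=(96, 7)),   # 96 past time steps, 7 features
    layers.Conv1D(256, 3),        # temporal convolution
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Dropout(0.2),
    layers.LSTM(512),             # final hidden state only
    layers.Dropout(0.4),
    layers.Dense(48 * 7),         # flat forecast vector
    layers.Reshape([48, 7]),      # 48 future steps, 7 features
])
model.compile(optimizer="adam", loss="mse")
```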

Then, I decided to train my super pumped-up model for 100 epochs on the whole dataset, without caring about overfitting. By plotting the model's forecast on top of the true data used in training, I discovered that the overfitting was massive. But not always: by shifting the input window just a little backward (together with the true and predicted future values, of course), the future values were predicted out of phase, as if the prediction tried to keep the same phase regardless of the values in the window.
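This is roughly how I run that check; the helper name, `start`, `shift`, and `feature` are made up for illustration, and `model` and `series` (a `(T, 7)` array) are assumed to exist already:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reproduction of the phase check described above:
# predict from two input windows offset by a few steps and overlay
# both forecasts on the true series.
def plot_shifted_forecasts(model, series, start, shift=8, feature=0):
    for s, color in [(start, "tab:blue"), (start - shift, "tab:orange")]:
        window = series[s : s + 96][None, ...]      # (1, 96, 7)
        pred = model.predict(window, verbose=0)[0]  # (48, 7)
        t = np.arange(s + 96, s + 96 + 48)
        plt.plot(t, pred[:, feature], color=color,
                 label=f"forecast from t={s}")
    t_true = np.arange(start - shift, start + 96 + 48)
    plt.plot(t_true, series[t_true[0] : t_true[-1] + 1, feature],
             "k--", alpha=0.5, label="true")
    plt.legend()
    plt.show()
```

If the two forecast curves sit on top of each other instead of being offset by `shift` steps, the model is ignoring the window contents and just repeating a memorized phase.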
To investigate the problem, I inspected the DataLoader written by the TensorFlow team, looking for a stride in the window generation, but it is 1, which means the algorithm sees all possible windows during training.
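Concretely, `WindowGenerator.make_dataset` in that tutorial builds its batches with `timeseries_dataset_from_array`; here is a minimal sketch of the same call (the random data is just a placeholder, and the shapes match my window configuration):

```python
import numpy as np
import tensorflow as tf

# Mirrors WindowGenerator.make_dataset from the tutorial: with
# sequence_stride=1, every possible window of the series is produced.
data = np.random.rand(1000, 7).astype("float32")  # placeholder series

ds = tf.keras.utils.timeseries_dataset_from_array(
    data=data,
    targets=None,
    sequence_length=96 + 48,  # input window + forecast horizon
    sequence_stride=1,        # shift of 1 step => all windows are seen
    shuffle=True,
    batch_size=32,
)

for window in ds.take(1):
    inputs, labels = window[:, :96, :], window[:, 96:, :]
    print(inputs.shape, labels.shape)  # (32, 96, 7) (32, 48, 7)
```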

I'm at a loss. What should I do?

Posted by: EugenioBertolini @ Dec. 29, 2021, 5:01 p.m.