I am trying a simple model with just 0.8 of the dataset used for training, and validation goes well: accuracy and loss are about the same as in training. But when I submit the model to the competition, the results get much worse (the model reaches roughly 80% accuracy in training and in validation, but drops to 45% here). I have also tried building a test set from the given data, splitting it 70% training, 20% validation and 10% test, with the test portion never seen by the model and never used for early stopping or anything else, and results are still good on that set, but not when submitted. Any clue what could be happening? Maybe the competition test set is unbalanced? It does not seem so, and the data is properly split in the code...
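The kind of split I mean looks roughly like this (a scikit-learn sketch, not my actual code; the DataFrame `df` and the `label` column are just placeholders):

```python
from sklearn.model_selection import train_test_split

# Placeholder data: a DataFrame `df` with a `label` column.
X, y = df.drop(columns=["label"]), df["label"]

# First carve off 10% as a held-out test set the model never sees.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=42
)

# Then split the remaining 90% into 70% train / 20% val of the full data,
# i.e. 20/90 of what is left.
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=2 / 9, stratify=y_tmp, random_state=42
)
```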
Posted by: MigMig @ Nov. 14, 2022, 11:24 a.m.

Hello! I think I have a similar problem: I get results on the hidden test set that are much lower than the ones on my validation set, so maybe I am overfitting it in some way. Have you found an explanation to your issue? Thank you!
Posted by: clementine_a @ Nov. 20, 2022, 5:16 p.m.

Nope, not yet... I thought maybe it was because I was restoring the best weights for validation in the early stopping phase (even though that would not explain the good test results), but even that way the results on the hidden data are still bad. If I manage to solve it I'll post it here, but it does not seem likely; it is very strange behavior...
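For context, by restoring the best weights in early stopping I mean roughly this (a Keras sketch; `model`, the data variables and the patience value are placeholders, not my actual setup):

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",        # watch the validation loss
    patience=10,               # stop after 10 epochs without improvement
    restore_best_weights=True, # roll back to the weights of the best validation epoch
)

model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=100,
    callbacks=[early_stop],
)
```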
Posted by: MigMig @ Nov. 21, 2022, 6:10 p.m.