Good evening,
today we made two submissions that are very different from each other (double-checked: I didn't upload the same zip, and the preprocessing in model.py is different too), yet they produce the exact same accuracy score on CodaLab, down to the last digit.
The same thing apparently happened to other groups, and it had never happened to us before today.
Is there a solution?
Since this is a classification problem with a finite number of test samples, any two models that make the same number of errors get exactly the same score, even if they err on different samples. This is most likely what you are seeing.
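As a quick sanity check (illustrative numbers only, not the actual test-set size or error counts), accuracy on N samples can only take N+1 distinct values, so a collision is not surprising:

```python
# Two very different models that happen to make the same number of
# errors on a finite test set receive identical accuracy scores.
n_samples = 500       # assumed test-set size

errors_model_a = 37   # assumed error count for model A
errors_model_b = 37   # model B errs on *different* samples, but the same count

acc_a = (n_samples - errors_model_a) / n_samples
acc_b = (n_samples - errors_model_b) / n_samples

print(acc_a, acc_b)  # both 0.926 — identical down to the last digit
```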
Posted by: an2dl.competitions @ Nov. 25, 2021, 10:31 a.m.