Hi organizers,
We found that the ordering of the submission file names can seriously affect the test results on CodaLab (about a 5.0 performance gap). We are wondering whether the order of the submission files needs to be consistent with the training files, i.e., whether the model trained on "train_0.csv" must produce the submission file "predictions_0.csv", and so on. Sorry for any inconvenience caused.
Hi Malearic, yes, the order must match the training files, since this is a cross-validation evaluation scheme. If different split IDs are used, the training data will not be correct and the submission will be considered invalid.
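For clarity, the required pairing can be sketched as follows (the file names come from this thread; the split count and the helper name are illustrative assumptions, not part of the official starter kit):

```python
# Illustrative sketch only: maps each training split to the submission
# file it must produce. NUM_SPLITS is an assumption for this example.
NUM_SPLITS = 3

def expected_submission_names(num_splits: int = NUM_SPLITS) -> dict:
    """Return the training-file -> submission-file pairing."""
    return {f"train_{i}.csv": f"predictions_{i}.csv" for i in range(num_splits)}

mapping = expected_submission_names()
# The model trained on "train_0.csv" must be submitted as "predictions_0.csv", etc.
```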
Posted by: uparchallenge @ Oct. 17, 2022, 9:11 a.m.

Sorry to disturb. I am confused by my offline validation results, which turned out to be inconsistent with the score on CodaLab. Could I ask whether a fixed or a sliding threshold is used, and if so, what the threshold is?
Sorry for any inconvenience caused.
Hi, we use a fixed threshold of 0.5. Additionally, we calculate the scores for each evaluation domain independently and average them for each split. The final result is then the average over the three splits.
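The aggregation described above can be sketched as follows (the per-domain metric here is a plain accuracy used as a stand-in; the challenge's actual metric and function names are assumptions for illustration):

```python
import numpy as np

THRESHOLD = 0.5  # fixed threshold stated in the reply above

def domain_score(y_true, y_score):
    """Per-domain score. Plain accuracy is a stand-in here; the
    challenge's actual metric may differ."""
    y_pred = (np.asarray(y_score) >= THRESHOLD).astype(int)
    return float((y_pred == np.asarray(y_true)).mean())

def final_score(splits):
    """splits: one entry per split, each a list of (y_true, y_score)
    pairs, one pair per evaluation domain. Scores are computed per
    domain, averaged within each split, then averaged over the splits."""
    split_scores = [
        float(np.mean([domain_score(t, s) for t, s in domains]))
        for domains in splits
    ]
    return float(np.mean(split_scores))
```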
Posted by: uparchallenge @ Oct. 21, 2022, 2:46 p.m.

Thanks for your kind reply.
Posted by: melaeric @ Oct. 21, 2022, 2:49 p.m.