> Score 0.0 but good local performance.

Hello,
I just uploaded the submission file. Is there something wrong with the format? Can you please let me know?
Thanks

Posted by: niksss @ April 29, 2022, 12:56 p.m.

Hello Niksss,

At first glance, I do not see any problems with the format of your submission, but I can look into it later and let you know if I find anything.
Keep in mind that object detection on artworks with a high number of categories is very challenging.

Some things to consider to check if there is something wrong with your model or the submission format:
1) You can run predictions on the training set and check whether the mAP is higher there. If it remains zero, there is probably something wrong with your model.
2) You can split off part of the training set to create your own validation set and use the Jupyter notebook in the starting kit (Participate -> Files) to score the mAP locally. The scoring algorithm in the starting kit is quite similar to the one we use for the leaderboard.
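
If you prefer to script it instead of using the notebook, here is a minimal sketch of local mAP scoring with pycocotools (not the starting-kit notebook itself; the file names gt_val.json and predictions.json are placeholders for your own validation split and detection results):

```python
# Minimal local mAP check with pycocotools (a sketch, not the starting-kit
# notebook). Assumes a COCO-format ground truth file "gt_val.json" (your
# held-out validation split) and a COCO-format detection results file
# "predictions.json" -- both file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("gt_val.json")                  # ground truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # your model's detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                          # prints mAP@[.5:.95] etc.
```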

Best,
Mathias

Posted by: mathiaszinnen @ April 29, 2022, 1:44 p.m.

Hi, thanks for your suggestions.
I tried both steps, and the mAP is NOT 0 in either case.

After further comparison of the dummy results.json with my submission file, I found that the 'id' starts from 1 in the results.json file, whereas the 'id' in my submission file starts from 0.
Can that be a problem?

Posted by: niksss @ April 29, 2022, 2:05 p.m.

Yes, good catch! Indeed, the scoring program assumes the image ids start from one. The starting kit also contains a reference solution where you only have to add the annotations and can leave the images array untouched. See also this thread: https://codalab.lisn.upsaclay.fr/forums/1939/357/.
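
In case it is useful, a rough sketch of how the shift could be scripted (assuming COCO-style detection results, i.e. a list of dicts each carrying an "image_id" key; the file names are placeholders):

```python
# Sketch: shift 0-based image ids to the 1-based indexing the scoring
# program expects. Assumes COCO-style detection results ("image_id" key);
# "predictions.json" is a placeholder file name.
import json

with open("predictions.json") as f:
    detections = json.load(f)

for det in detections:
    det["image_id"] += 1  # 0-based -> 1-based

with open("predictions_fixed.json", "w") as f:
    json.dump(detections, f)
```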

I guess we should be clearer about the assumed image indexing, I will update the FAQ accordingly. Thanks for pointing this out!

Best,
Mathias

Posted by: mathiaszinnen @ April 29, 2022, 6:45 p.m.

Hi,
I changed the ids to start from 1, and the score is still 0.0.
Can you please check if everything is alright with my submission?

Thanks

Posted by: niksss @ April 30, 2022, 4:18 a.m.

Hi,
I have checked your submission and noticed two things:

1) You seem to report the boxes in [x1,y1,x2,y2] format. Our scoring algorithm expects [x,y,width,height] (see https://cocodataset.org/#format-results). Can you try converting the boxes (see the sketch after this list) and check whether that helps?
2) The ids in the ground truth images array need to match the image ids in your prediction JSON. Can you try using the dummy submission JSON provided in the starting kit and generating your predictions there (i.e., keeping the provided images and categories arrays)?
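
A rough sketch of the box conversion (assuming detections as a list of dicts with a COCO-style "bbox" key; the file names are placeholders):

```python
# Sketch: convert boxes from [x1, y1, x2, y2] corner format to the
# [x, y, width, height] format expected by COCO-style scoring.
# Assumes a list of dicts with a "bbox" key; file names are placeholders.
import json

with open("predictions.json") as f:
    detections = json.load(f)

for det in detections:
    x1, y1, x2, y2 = det["bbox"]
    det["bbox"] = [x1, y1, x2 - x1, y2 - y1]  # corners -> x, y, w, h

with open("predictions_xywh.json", "w") as f:
    json.dump(detections, f)
```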

If this does not help, please let me know. I can then have a look at your submission again and see if I find something else.

Posted by: mathiaszinnen @ April 30, 2022, 11:42 a.m.