Woodscape Fisheye Object Detection Challenge for Autonomous Driving | CVPR 2022 OmniCV Workshop Forum


> Difference between the competition metric and the COCO metric?

When I calculate the results using the COCO metric, the score threshold is set to the default of 0.05, but with that threshold the competition score is very low.
Unless I raise the score threshold above 0.5, the score does not increase.
What is the difference between the competition score calculation and the COCO metric?

Posted by: heboyong @ April 30, 2022, 12:11 p.m.

Dear heboyong,

A proposed bounding box is considered a true positive if it has an IoU (Intersection over Union) of more than 0.5 (the IoU threshold) with the corresponding ground-truth bounding box. The correspondence for a ground-truth bounding box is established by picking the proposed box that has the maximum IoU. Once the correspondences for all proposed bounding boxes are established, each of them is categorized as either a TP (True Positive) or an FP (False Positive). One-to-many correspondences are penalized during this process.
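To make the matching rule concrete, here is a minimal sketch of greedy IoU matching with a 0.5 threshold. It is not the official evalkit code, and it assumes boxes are given as (x1, y1, x2, y2) pixel coordinates; duplicate matches to an already-claimed ground-truth box count as false positives.

```python
# Illustrative sketch only (not the evalkit implementation).

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_predictions(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Return a 'TP'/'FP' flag for each prediction, in the order given."""
    matched_gt = set()
    flags = []
    for pred in pred_boxes:
        ious = [iou(pred, gt) for gt in gt_boxes]
        best = max(range(len(gt_boxes)), key=lambda i: ious[i], default=None)
        if best is not None and ious[best] > iou_thresh and best not in matched_gt:
            matched_gt.add(best)        # ground-truth box is now claimed
            flags.append("TP")
        else:
            flags.append("FP")          # below threshold or duplicate match
    return flags
```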

Note that with other evaluation techniques (VOC, COCO, etc.), a confidence score may be needed for each bounding box. For this competition, confidence scores are not required; instead, the predictions should be sorted in descending order of confidence, since the proposed bounding boxes are evaluated in the order they appear in the text files. Also, with other techniques, precision is interpolated at different levels of recall (using 11 or 101 points) and averaged, whereas for this competition the exact area under the precision-recall curve is calculated.
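To illustrate that difference, here is a minimal sketch (again, not the official evalkit code; the `average_precision` helper and the example numbers are hypothetical). Precision and recall are recorded after each prediction in submission order, and the AP is the exact area under that precision-recall curve rather than an 11-point or 101-point interpolated average.

```python
# Illustrative sketch only (not the evalkit implementation).

def average_precision(flags, num_gt):
    """flags: list of 'TP'/'FP' in submission order; num_gt: number of ground-truth boxes."""
    tp = fp = 0
    precisions, recalls = [], []
    for f in flags:
        if f == "TP":
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)

    # Exact area under the PR curve: sum precision over each recall increment.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Example: 3 of 4 predictions are correct and there are 5 ground-truth boxes.
print(average_precision(["TP", "TP", "FP", "TP"], num_gt=5))  # 0.55
```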

Hope that helps. For more information, you can check the evaluation code given in the evalkit.

Regards,
Saravanabalagi.

Posted by: saravanabalagi @ May 3, 2022, 11:37 p.m.