Woodscape Fisheye Object Detection Challenge for Autonomous Driving | CVPR 2022 OmniCV Workshop Forum


> Problems with the bounding box labels in the training set.

Dear organizer,
First of all, we sincerely thank you for your contributions to the field of fisheye object detection, including organizing this workshop and providing the public dataset.

We downloaded the official data from your CodaLab website: https://codalab.lisn.upsaclay.fr/competitions/4074.

While checking the training data, we found the following two problems.

First, several bounding box labels appear to be wrong, for example (see the visualization sketch at the end of this post):
● the (511, 192, 569, 254) bounding box in 00001_FV.png;
● the (973, 330, 1077, 434) and (872, 304, 956, 362) bounding boxes in 00402_RV.png;
● the (957, 146, 1081, 323), (495, 3, 745, 131), and (1033, 262, 1174, 447) bounding boxes in 00207_MVL.png;
● and so on.

Second, some labels for objects we need to detect are missing, such as the traffic light in 00005_FV and several persons in 00000_FV, among others.

The wrong and missing bounding box labels may confuse the fisheye detection model during the training stage. Therefore, we would like to confirm: does the ground truth of the test set have the same distribution of bounding box labels (including wrong or missing ones) as the training set?
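For reference, here is a minimal sketch for overlaying the flagged boxes on the images for inspection. The `rgb_images/` folder name and the (x1, y1, x2, y2) corner format are assumptions on our side; please adjust them to match the released annotation files.

```python
# Minimal sketch: draw the flagged boxes on the corresponding images.
# Assumes (x1, y1, x2, y2) pixel coordinates and an `rgb_images/` folder;
# adjust to the actual layout of the released dataset.
import cv2

flagged = {
    "00001_FV.png": [(511, 192, 569, 254)],
    "00402_RV.png": [(973, 330, 1077, 434), (872, 304, 956, 362)],
    "00207_MVL.png": [(957, 146, 1081, 323), (495, 3, 745, 131), (1033, 262, 1174, 447)],
}

for name, boxes in flagged.items():
    img = cv2.imread(f"rgb_images/{name}")
    for x1, y1, x2, y2 in boxes:
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255), 2)  # draw box in red
    cv2.imwrite(f"flagged_{name}", img)  # save an annotated copy for inspection
```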

Posted by: baiqi @ May 19, 2022, 7:58 a.m.

Hi baiqi,

Thank you for posting this on the forum, we have received your message and we are looking into the problem.
We’ll respond as soon as possible, we appreciate your patience.

Regards,
Saravanabalagi.

Posted by: saravanabalagi @ May 19, 2022, 7:46 p.m.

Dear organizer, is there any update on the bounding box problems we raised?

Posted by: baiqi @ May 23, 2022, 7:35 a.m.

Hi baiqi,

Thank you for raising the issue. We are aware that a small number of incorrect bounding box labels exist in the dataset. Note that bounding boxes are published as labels only if they pass the annotation rules (such as the amount of occlusion, the object size in pixels, etc.), and this can result in false negatives (missing bounding boxes).
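Purely as an illustration of how such rules produce missing boxes (the actual rules and thresholds of the labelling pipeline are not reproduced here, and the values below are hypothetical):

```python
# Hypothetical sketch of an annotation-rule filter. The 20 px and 0.6
# thresholds are made-up values for illustration only.
MIN_SIDE_PX = 20        # hypothetical minimum object size in pixels
MAX_OCCLUSION = 0.6     # hypothetical maximum allowed occlusion fraction

def keep_box(x1, y1, x2, y2, occlusion):
    """Return True if a box passes these (hypothetical) annotation rules."""
    return (x2 - x1) >= MIN_SIDE_PX and (y2 - y1) >= MIN_SIDE_PX and occlusion <= MAX_OCCLUSION

# An object that is too small or too heavily occluded is dropped from the
# published labels, which appears to a detector as a missing ground-truth box.
```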

The labels for the test and training sets were both produced through the same annotation process, so we believe they should have similar distributions.

Regards,
Saravanabalagi.

Posted by: saravanabalagi @ May 26, 2022, 11:12 a.m.