2023 IEEE GRSS Data Fusion Contest Track 1 Forum


> AP50? mAP?

Dear organizers:
If you chose AP50 as the final assessment criterion, why is the per-category breakdown in the returned results reported only as mAP?


Posted by: miaodq @ Feb. 17, 2023, 4:30 a.m.

Thanks for your advice! The per-category evaluation metric is mAP, which is only intended to give participants analytical material on the classification performance of their model. Unfortunately, we are not able to change the class-wise evaluation metric to AP_50. This has no impact on the overall AP_50 evaluation metric.
We wish you a good experience.
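For readers unfamiliar with the distinction discussed here: AP_50 counts a detection as correct when its IoU with a ground-truth box is at least 0.50, whereas COCO-style mAP averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of the IoU computation and the two threshold conventions (box coordinates and values below are illustrative, not from the contest):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# AP_50 uses the single threshold 0.50; COCO-style mAP averages AP
# over the ten thresholds 0.50, 0.55, ..., 0.95.
AP50_THRESHOLD = 0.5
COCO_THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]

# Hypothetical prediction and ground-truth boxes:
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 52)
overlap = iou(pred, gt)
print(overlap)
# A detection with moderate overlap may be a true positive under AP_50
# yet a false positive at the stricter COCO thresholds:
hits = [overlap >= t for t in COCO_THRESHOLDS]
print(hits)
```

This is why the two metrics can rank methods differently: a model producing loosely localized boxes can score well on AP_50 but poorly on mAP.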

Posted by: dfc2023.iecas @ Feb. 20, 2023, 5:22 a.m.

If so, why not use mAP as the evaluation metric?


Posted by: miaodq @ Feb. 20, 2023, 8:37 a.m.

Many challenges, such as COCO, LVIS, and Objects365, use mAP as the final metric for comparing methods, and most papers report mAP as well. Why not give mAP a try?

Posted by: miaodq @ Feb. 20, 2023, 8:42 a.m.

@miaodq Thanks for your valuable suggestions. However, once the rules are determined, they cannot easily be changed, in order to guarantee fairness. We hope you enjoy the competition.

Posted by: kaycharm @ Feb. 24, 2023, 1:18 a.m.