ECCV 2022 Workshop SSLAD Track 3 - Corner Case Detection

Organized by SSLAD2022
Reward $10,000

First phase: Validation (starts Aug. 1, 2022, midnight UTC)

End: Competition ends Dec. 31, 2023, midnight UTC

Note: The official competition has ended and the awards have been given to the winning teams. You can find their reports here. If you are still interested in the competition and would like to see how your approach compares with others, we have kept the submission portal open; you are welcome to submit at any time before 31/12/2023, when the portal will be closed automatically. We may postpone this date upon request, so please let us know. Thank you!

Overview

Deep learning has achieved prominent success in detecting common traffic participants (e.g., cars, pedestrians, and cyclists). Such detectors, however, are generally incapable of detecting novel objects that are not seen or rarely seen during training. These objects are called (object-level) corner cases and fall into two categories: 1) instances of novel classes (e.g., a runaway tire) and 2) novel instances of common classes (e.g., an overturned truck). Properly handling corner cases has become essential to building reliable autonomous-driving perception systems. The goal of this competition is to discover novel methods for detecting corner cases among common traffic participants in the real world.

Data Description

Training.  We provide two large-scale real-world autonomous-driving datasets, SODA10M and ONCE, for training. You may choose to use either or both of the datasets. To better facilitate corner case detection, you may also use ImageNet-1k in addition to the provided autonomous-driving datasets for pretraining/training.

SODA10M is a 2D object detection dataset for autonomous driving, which contains 10 million unlabeled images and 20k images fully-annotated with 6 representative categories (pedestrian, cyclist, car, truck, tram, tricycle). For more details about this dataset, please refer to arxiv report and dataset website.

ONCE is a 2D and 3D object detection dataset for autonomous driving, which contains 1 million LiDAR frames, 7 million camera images, and 15k fully-annotated scenes with 5 categories (car, bus, truck, pedestrian, cyclist). For more details about this dataset, please refer to arxiv report and dataset website.

Evaluation.  The evaluation will be conducted on the corner case dataset, CODA2022, which contains 9768 camera images (collected from SODA10M and ONCE) with 80180 annotated objects spanning 43 object categories. The first 7 categories (pedestrian, cyclist, car, truck, tram, tricycle, bus) are common categories, while the rest are novel categories. Common-category objects are fully annotated, whereas novel-category objects are annotated only if they obstruct or have the potential to obstruct the road. (Note: the tram category in SODA10M contains both trams and buses, but trams are extremely rare in CODA, so you may simply treat all buses and trams as members of the bus category in CODA.)

The validation set and the test set of CODA2022 each contain 4884 images. The validation set covers only 29 of the 43 categories, while the test set covers all 43 categories, simulating the real-world scenario in which brand-new categories are encountered after deployment. You can download CODA2022 under the "Participate" tab after signing in. For more information on CODA2022, please refer to the base version of this dataset: arxiv report and dataset website.
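As a quick reference, the snippet below is a minimal sketch of how the validation annotations can be inspected with the COCO API. It assumes the annotation file follows the standard COCO format; the file name used here is a placeholder, not the actual name in the download.

```python
# Minimal sketch: inspect the CODA2022 validation annotations with the COCO API.
# Assumes standard COCO-format annotations; the file name below is a placeholder.
from pycocotools.coco import COCO

coda_val = COCO("coda2022_val_annotations.json")  # placeholder path

categories = coda_val.loadCats(coda_val.getCatIds())
print(f"{len(coda_val.getImgIds())} images, {len(coda_val.getAnnIds())} annotated objects")
print("category ids and names:", [(c["id"], c["name"]) for c in categories])
```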

General Rules

  • To ensure fairness, the top-3 winners are required to write a technical report.
  • Each entry must be associated with a team and its affiliation (members of one team should register as one, and the affiliation should appear in the team name).
  • Using multiple accounts to increase the number of submissions is strictly prohibited.
  • Only SODA10M, ONCE, and ImageNet-1k are allowed for this competition. Pretraining/training on datasets other than these three is not allowed.
  • Results should follow the correct format and must be uploaded to the evaluation server through the CodaLab competition site. Detailed information about how results will be evaluated is provided on the evaluation page.
  • The best entry of each team is publicly shown on the leaderboard at all times.
  • The organizer reserves the absolute right to disqualify entries that are incomplete, illegible, late, or in violation of the rules.

Awards

Participants with the most successful and innovative entries will be invited to present at this workshop and receive awards. A 5,000 USD cash prize will be awarded to the top team, the second-place team will receive 3,000 USD, and the third-place team will receive 2,000 USD.

Contact Us

For more information, please contact us at sslad2022@googlegroups.com.

Evaluation Metric

For this task, we define a custom metric that is the sum of the following four metrics:

  • AP-common: mAP over objects of common categories (pedestrian, cyclist, car, truck, tram, tricycle, bus);
  • AP-agnostic: mAP over objects of all categories in a class-agnostic manner;
  • AR-agnostic: mAR over objects of all categories in a class-agnostic manner;
  • AR-agnostic-corner: mAR over corner-case objects of all categories in a class-agnostic manner;

where mAP and mAR stand for mean Average Precision and mean Average Recall as defined in the COCO API.
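As a rough illustration, the sketch below computes the four components with pycocotools and sums them. The ground-truth and result file names are placeholders, and the corner-case restriction is approximated here by evaluating only the novel category ids (ids greater than 7); the official evaluation script may implement this filtering differently.

```python
# Rough sketch of the combined metric using pycocotools. File names are
# placeholders, and the corner-case restriction is only an approximation
# of the official protocol.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

COMMON_CAT_IDS = [1, 2, 3, 4, 5, 6, 7]  # pedestrian, cyclist, car, truck, tram, tricycle, bus

def coco_stats(gt, dt, cat_ids=None, class_agnostic=False):
    """Return (mAP@[.50:.95], mAR@100) for the given category subset."""
    ev = COCOeval(gt, dt, iouType="bbox")
    if cat_ids is not None:
        ev.params.catIds = cat_ids
    if class_agnostic:
        ev.params.useCats = 0  # ignore category labels when matching boxes
    ev.evaluate()
    ev.accumulate()
    ev.summarize()
    return ev.stats[0], ev.stats[8]

gt = COCO("coda2022_val_annotations.json")  # placeholder path
dt = gt.loadRes("results.json")             # predictions in COCO result format

ap_common, _ = coco_stats(gt, dt, cat_ids=COMMON_CAT_IDS)
ap_agnostic, ar_agnostic = coco_stats(gt, dt, class_agnostic=True)

# Approximation: treat every category with id > 7 as a corner case.
corner_ids = [c for c in gt.getCatIds() if c > 7]
_, ar_agnostic_corner = coco_stats(gt, dt, cat_ids=corner_ids, class_agnostic=True)

score = ap_common + ap_agnostic + ar_agnostic + ar_agnostic_corner
print(f"AP-common={ap_common:.4f}  AP-agnostic={ap_agnostic:.4f}  "
      f"AR-agnostic={ar_agnostic:.4f}  AR-agnostic-corner={ar_agnostic_corner:.4f}")
print(f"combined score = {score:.4f}")
```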

Submission Format

The .json results should be packed into a single zip file. An example zip file is available as faster-rcnn-val.zip. For common-category predictions, the "category_id" should match the category ids (from 1 to 7) listed in the CODA2022 validation annotation file. For novel-category predictions, use the integer "8" as the "category_id".
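As a minimal sketch, the snippet below writes predictions in the standard COCO detection-result format and packs them into a zip. The file names are placeholders; the actual layout should match the provided faster-rcnn-val.zip example.

```python
# Minimal sketch: write predictions in COCO detection-result format and pack
# them into a zip. File names are placeholders; match the layout of the
# provided faster-rcnn-val.zip example.
import json
import zipfile

predictions = [
    # bbox is [x, y, width, height]; ids 1-7 follow the CODA2022 validation annotations
    {"image_id": 1, "category_id": 3, "bbox": [100.0, 200.0, 50.0, 80.0], "score": 0.91},
    # every novel-category prediction uses category_id 8
    {"image_id": 1, "category_id": 8, "bbox": [400.0, 150.0, 120.0, 60.0], "score": 0.47},
]

with open("results.json", "w") as f:
    json.dump(predictions, f)

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("results.json")
```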

 

Terms of Use

The CODA2022 dataset is released for academic research only and is free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset you agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the images or any portion of the derived data. For the full terms of use, please refer to the terms of use on the CODA dataset website.

Validation

Start: Aug. 1, 2022, midnight

Description: Validation set evaluation. You can submit up to 100 times every day and 1000 times in total throughout the whole competition.

Test

Start: Aug. 1, 2022, midnight

Description: Test set evaluation. You can only submit up to TEN times throughout the whole competition.

Competition Ends

Dec. 31, 2023, midnight
