Welcome to Track 3 of The RoboDrive Challenge!
In the rapidly evolving domain of autonomous driving, the accuracy and resilience of perception systems are paramount. Recent advancements, particularly in bird's eye view (BEV) representations and LiDAR sensing technologies, have significantly improved in-vehicle 3D scene perception.
Yet, the robustness of 3D scene perception methods under varied and challenging conditions, which is integral to ensuring safe operations, has been insufficiently assessed. To fill this gap, we introduce The RoboDrive Challenge, seeking to push the frontiers of robust autonomous driving perception.
RoboDrive is among the first benchmarks that target probing the Out-of-Distribution (OoD) robustness of state-of-the-art autonomous driving perception models, centered around two mainstream topics: common corruptions and sensor failures.
There are eighteen real-world corruption types in total, spanning three perspectives:
Additionally, we aim to probe the 3D scene perception robustness under camera and LiDAR sensor failures:
For additional implementation details, kindly refer to our RoboBEV, RoboDepth, and Robo3D projects.
The RoboDrive Challenge is affiliated with the 41st IEEE International Conference on Robotics and Automation (ICRA 2024).
ICRA is the IEEE Robotics and Automation Society's flagship conference. ICRA 2024 will be held from May 13th to 17th, 2024, in Yokohama, Japan.
The top-performing participants of this competition are honored with cash awards and certificates.
| Award | Amount | Honor |
| :-- | :-- | :-- |
| 🥇 1st Place | Cash Award $5,000 | Official Certificate |
| 🥈 2nd Place | Cash Award $3,000 | Official Certificate |
| 🥉 3rd Place | Cash Award $2,000 | Official Certificate |
Note: The cash awards are donated by our sponsors and are shared among five tracks. We reserve the right to examine the validity of each submission. For more information, kindly refer to the Terms & Conditions page.
Additionally, we provide the following awards for participants who meet certain conditions.
| Award | Honor | Condition |
| :-- | :-- | :-- |
| Innovative Award | Official Certificate | Solutions with excellent novelty |
| Certificate of Participation | Official Certificate | Teams with submission records in both phases |
Note: The Innovative Award will be selected by the program committee and given to ten awardees; two per track.
Follow us on Twitter for the latest information. Join the Slack and WeChat groups for technical discussions.
Kindly refer to Data Preparation for the details to prepare the training and evaluation data.
The submission guideline for this competition is available on the Submission & Evaluation page.
Please carefully check the Terms & Conditions page for the rules of this competition.
If you have any questions, please contact us by sending an email to robodrive.2024@gmail.com.
If you are interested in other tracks of The RoboDrive Challenge, kindly refer to the following pages:
In this track, participants are expected to submit their predictions to the CodaLab server for model evaluation. To ensure a successful submission and evaluation, you need to follow these instructions:
You will need to register for this track on CodaLab before you can make a submission. To do so, apply for a CodaLab account with your email if you do not have one. Then, go to the server page of this track and press `Participate`; you will see a `Sign In` button. Click it to register.
You will need to prepare the model prediction file for submission. Specifically, the evaluation server of this track accepts a `.zip` file of your model predictions in `.pkl` format. You can follow the example listed on this page.
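As a rough sketch of this packaging step, the snippet below serializes a predictions dictionary to a `.pkl` file and wraps it in a `pred.zip`. The keys and values shown are purely hypothetical; the actual structure your predictions must follow is defined by the example on this page.

```python
import pickle
import zipfile

# Hypothetical prediction structure; the real keys and value format
# are specified by the track's example on this page.
predictions = {"sample_token_0": [0, 1, 2]}

# Serialize the predictions to a .pkl file.
with open("pred.pkl", "wb") as f:
    pickle.dump(predictions, f)

# Package the .pkl file into pred.zip for upload to the server.
with zipfile.ZipFile("pred.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("pred.pkl")
```

Uploading the resulting `pred.zip` is then done manually through the server page, as described below.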
You will need to submit your `pred.zip` file manually to the evaluation server. To do so, go to the server page of this track and press `Participate`; you will see a `Submit / View Results` button. Click it for submission. You are encouraged to fill in the submission info with your team name, method name, and method description. Then, click the `Submit` button and select your `pred.zip` file.
After successfully uploading the file, the server will automatically evaluate the performance of your submission and put the results on the leaderboard.
⚠️ Do not close the page while you are uploading the prediction file.
You can view your scores by pressing the `Results` button. Following the same configuration as SurroundOcc, we evaluate model performance with the IoU and mIoU metrics.
This competition is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
To ensure a fair comparison among all participants, we require:
If you have any questions or concerns, kindly get in touch with us at robodrive.2024@gmail.com.
Q1: "How can I register a valid team for this competition?"

A1: To register a team, kindly fill in this Google Form. The registration period is from now until the deadline of phase one, i.e., Mar 31, 2024.

Q2: "Are there any restrictions for the registration? For example, the number of team members."

A2: Each team leader should make a valid registration for his/her team. Each participant can only be registered by one team. There is no restriction on the number of team members in a team.

Q3: "Can team members be changed during the competition?"

A3: No. You CANNOT change the list of team members after registration. You must register again as a new team if you need to add or remove any members of your team.

Q4: "How many tracks can I participate in?"

A4: Each team can participate in at most two tracks in this competition.

Q5: "What can I expect from this competition?"

A5: We provide the winning teams from each track with cash awards and certificates. The winning solutions will be summarized in a technical report. An example of last year's technical report can be found here.

Q6: "Can I use additional data resources for model training?"

A6: No. All participants must follow the SAME data preparation procedures as listed in DATA_PREPARE.md. Additional data sources are NOT allowed in this competition.

Q7: "Can I use corruption augmentations during model training?"

A7: No. The theme of this competition is to probe the out-of-distribution robustness of autonomous driving perception models. Therefore, all participants must REFRAIN from using any corruption simulations as data augmentations during model training, including any atomic operation that comprises any one of the corruptions in this competition.

Q8: "How should I configure the model training? Are there any restrictions on model size, image size, loss function, optimizer, number of epochs, and so on?"

A8: We provide one baseline model for each track in GET_STARTED.md. Participants are recommended to use these baselines as starting points when configuring model training. There is no restriction on normal training configurations, including model size, image size, loss function, optimizer, and number of epochs.

Q9: "Can I use LiDAR data for Tracks 1 to 4?"

A9: No. Tracks 1 to 4 are single-modality tracks that only involve the use of camera data. The goal of these tracks is to probe the robustness of perception models under camera-related corruptions. Participants who are interested in multi-modal robustness (camera + LiDAR) can refer to Track 5 in this competition.

Q10: "Is it permissible to use self-supervised model pre-training (such as MoCo and MAE)?"

A10: Yes. The use of self-supervised pre-trained models is possible. Such models may include MoCo, MoCo v2, MAE, DINO, and many others. Please make sure to acknowledge (in your code and report) any pre-trained models you use.

Q11: "Can I use large models (such as SAM) to generate pre-training or auxiliary annotations?"

A11: No. The use of large foundation models, such as CLIP, SAM, SEEM, and any other similar models, is NOT allowed in this competition. This is to ensure a relatively fair comparison environment among different teams. Any violation of this rule will be regarded as cheating, and the results will be canceled.

Q12: "Are there any restrictions on the use of pre-trained weights (such as DD3D, ImageNet, COCO, ADE20K, Object365, and so on)?"

A12: Following the most recent BEV perception works, it is possible to use weights pre-trained on DD3D, ImageNet, and COCO. The use of weights pre-trained on other datasets is NOT allowed in this competition.

Q13: "Can I combine the training and validation sets for model training?"

A13: It is strictly NOT allowed to use the validation data for model training. All participants MUST follow the nuScenes official train split during model training and REFRAIN from involving any samples from the validation set. Any violation of this rule will be regarded as cheating, and the results will be canceled.

Q14: "Can I use model ensembling and test-time augmentation (TTA)?"

A14: Like many other academic competitions, it is possible to use model ensembling and test-time augmentation (TTA) to enhance the model when preparing submissions. Participants SHOULD include the necessary details on the use of model ensembling and TTA in their code and reports.

Q15: "How many times can I make submissions to the server?"

A15: For phase one (Jan. - Mar.), a team can submit up to 3 times per day and 99 times in total. For phase two (Apr.), a team can submit up to 2 times per day and 49 times in total. One team is affiliated with one CodaLab account only. Please REFRAIN from having multiple accounts for the same team.

...
Didn't find a related FAQ for your question? Let us know (robodrive.2024@gmail.com)!
This competition is hosted by the OpenMMLab team from Shanghai AI Laboratory.
This competition is supported by HUAWEI Noah's Ark Lab.
Phase one starts: Jan. 15, 2024, midnight
Phase two starts: April 1, 2024, midnight
Competition ends: May 1, 2024, 11:59 a.m.