3DOW is an AI dataset challenge organized by the 13th Workshop on Planning, Perception and Navigation for Intelligent Vehicles at IROS 2022 (IROS22-PPNIV22), and is led by Yancheng Pan and Huijing Zhao of Peking University.
This challenge aims to promote research on 3D semantic segmentation techniques for open-world problems such as long-tailed and out-of-distribution (OOD) data. The challenge data contains objects that do not belong to any known category in the training set. For each point of the input data, the model must provide a semantic label and a confidence score used to judge whether the point is OOD.
Instructions:
You can participate in the challenge on the "Participate" page.
As soon as the challenge is completed, the dataset and related code will be made publicly available.
Please feel free to contact us with any questions or suggestions. E-mail: Yancheng Pan [panyancheng@pku.edu.cn], Huijing Zhao [zhaohj@pku.edu.cn].
Your submission must be a single zip file, with its contents organized as follows:
result.zip
├── predictions
│   ├── 000000.label
│   ├── 000001.label
│   └── ...
└── confidence
    ├── 000000.conf
    ├── 000001.conf
    └── ...
Each .label file is a binary file containing the predicted label of every point, stored as uint32. Each .conf file is a binary file containing the confidence score of every point, stored as float32.
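A minimal sketch of writing one scan's predictions in the required binary formats with NumPy; the file names match the tree above, but the label values and confidence scores are made up for demonstration.

```python
import numpy as np

# Illustrative per-point predictions for a tiny 5-point "scan".
labels = np.array([1, 1, 4, 0, 2], dtype=np.uint32)           # semantic label per point
conf = np.array([0.9, 0.8, 0.3, 0.1, 0.7], dtype=np.float32)  # ID confidence per point

labels.tofile("000000.label")  # raw uint32 binary, one value per point
conf.tofile("000000.conf")     # raw float32 binary, one value per point

# Reading the files back preserves dtype and point order.
assert np.array_equal(np.fromfile("000000.label", dtype=np.uint32), labels)
assert np.array_equal(np.fromfile("000000.conf", dtype=np.float32), conf)
```

After writing all scans into `predictions/` and `confidence/` directories, the archive can be created with e.g. `zip -r result.zip predictions confidence`.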
This page describes how competition submissions are evaluated and scored.
For 3D semantic segmentation tasks, we evaluate the outputs by:
For OOD detection, we evaluate the confidence score by:
We also provide an AUROC for each predicted class, which reflects the reliability of the model when it predicts input data as that class. The AUROC is evaluated over all data predicted as that class. However, there are two special cases:
1. If no ID data is predicted as the class, the AUROC is 0 (everything predicted as that class is OOD).
2. If no OOD data is predicted as the class, the AUROC is 1 (everything predicted as that class is ID).
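The two special cases above can be sketched with a rank-based (Mann-Whitney) AUROC. This is an illustrative sketch, not the official evaluation code; the function name and example arrays are assumptions.

```python
import numpy as np

def per_class_auroc(conf, pred, is_ood, cls):
    """AUROC of confidence separating ID from OOD among points predicted as `cls`.

    Hypothetical helper mirroring the rules above: higher confidence should
    indicate ID. Returns 0.0 if no ID point was predicted as `cls`, and 1.0
    if no OOD point was predicted as `cls`.
    """
    mask = pred == cls
    id_scores = conf[mask & ~is_ood]
    ood_scores = conf[mask & is_ood]
    if id_scores.size == 0:
        return 0.0  # special case 1: everything predicted as this class is OOD
    if ood_scores.size == 0:
        return 1.0  # special case 2: everything predicted as this class is ID
    # Mann-Whitney U: fraction of (ID, OOD) pairs where the ID point received
    # strictly higher confidence, counting ties as 0.5.
    greater = (id_scores[:, None] > ood_scores[None, :]).sum()
    ties = (id_scores[:, None] == ood_scores[None, :]).sum()
    return (greater + 0.5 * ties) / (id_scores.size * ood_scores.size)

conf = np.array([0.9, 0.8, 0.2, 0.1])
pred = np.array([3, 3, 3, 3])
is_ood = np.array([False, False, True, True])
print(per_class_auroc(conf, pred, is_ood, 3))  # perfectly separated -> 1.0
```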

Confidence scores range from 0 to 1, and thresholds are uniformly sampled at intervals of 0.01. For a more precise evaluation, you should ensure your confidence scores are distributed over [0, 1] as uniformly as possible.
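A sketch of how uniformly sampled thresholds could be swept over the confidence scores (the arrays are illustrative assumptions, not the official evaluator):

```python
import numpy as np

# 101 thresholds at 0.01 intervals over [0, 1], as described above.
thresholds = np.linspace(0.0, 1.0, 101)

# Illustrative per-point confidences and OOD ground truth.
conf = np.array([0.95, 0.80, 0.30, 0.10])
is_ood = np.array([False, False, True, True])

tprs, fprs = [], []
for t in thresholds:
    flagged = conf < t  # low confidence -> flagged as OOD at this threshold
    tprs.append((flagged & is_ood).sum() / is_ood.sum())
    fprs.append((flagged & ~is_ood).sum() / (~is_ood).sum())
```

Because the sweep uses fixed 0.01 steps, confidences clustered on a few distinct values leave most thresholds uninformative, which is why spreading scores across [0, 1] yields a more precise evaluation.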
In addition, we provide the evaluation code for a better understanding of the evaluation methods.
Our dataset is based on the SemanticKITTI and SemanticPOSS datasets, and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes. Specifically, you should cite the following works:
@inproceedings{behley2019arxiv,
  author    = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
  title     = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
  booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
  year      = {2019}
}
@inproceedings{pan2020semanticposs,
  author    = {Pan, Yancheng and Gao, Biao and Mei, Jilin and Geng, Sibo and Li, Chengkun and Zhao, Huijing},
  title     = {SemanticPOSS: A point cloud dataset with large quantity of dynamic instances},
  booktitle = {2020 IEEE Intelligent Vehicles Symposium (IV)},
  pages     = {687--693},
  year      = {2020}
}
@inproceedings{geiger2012cvpr,
  author    = {A. Geiger and P. Lenz and R. Urtasun},
  title     = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
  booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  pages     = {3354--3361},
  year      = {2012}
}
Start: Aug. 19, 2022, midnight
End: Oct. 22, 2022, midnight
You must be logged in to participate in competitions.
| # | Username | Score |
|---|---|---|
| 1 | gebreawe | 0.735 |
| 2 | lidartp | 0.727 |
| 3 | ppq | 0.726 |