3D Semantic Segmentation to the Open World (3DOW) Challenge

Organized by DuKong

First phase
Start: Aug. 19, 2022, midnight UTC

Competition Ends
Oct. 22, 2022, midnight UTC


3DOW is an AI-based dataset challenge organized by the 13th Workshop on Planning, Perception and Navigation for Intelligent Vehicles at IROS 2022 (IROS22-PPNIV22), led by Yancheng Pan and Huijing Zhao of Peking University.

This dataset challenge aims to promote research on 3D semantic segmentation techniques for open-world problems such as long-tailed and out-of-distribution (OOD) data. In this challenge, the test data contains objects that do not belong to any known category in the training dataset. The model is required to provide a semantic label for each point of the input data, together with a confidence score used to judge whether the point is OOD.

Instructions:

  • Develop your 3D semantic segmentation model on the training dataset.
  • Use your optimized model to predict results on the test dataset.
  • Submit your results on the "Submit/View Results" page. Your submissions will then be evaluated and scored.

You can participate in the challenge on the "Participate" page.

 

As soon as the challenge is completed, the dataset and related code will be made publicly available.

Please feel free to contact us with any questions or suggestions. E-mail: Yancheng Pan [panyancheng@pku.edu.cn], Huijing Zhao [zhaohj@pku.edu.cn].

Submission format

Your submission must be a single zip file.

The contents of your zip file should be organized as follows:

 
   result.zip
    ├── predictions
    │     ├── 000000.label
    │     ├── 000001.label
    │     └── ...
    └── confidence
          ├── 000000.conf
          ├── 000001.conf
          └── ...

Each .label file is a binary file providing the label prediction for each point; its data type is uint32. Each .conf file is a binary file providing the confidence score for each point; its data type is float32.
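
For illustration, here is a minimal Python sketch of writing one scan's outputs in this format (the use of NumPy and the helper name write_scan_results are our assumptions; only the directory layout and data types above are specified by the challenge):

  import os
  import numpy as np

  def write_scan_results(labels, confidences, scan_id, out_dir="result"):
      # labels: per-point class labels; confidences: per-point scores in [0, 1].
      os.makedirs(os.path.join(out_dir, "predictions"), exist_ok=True)
      os.makedirs(os.path.join(out_dir, "confidence"), exist_ok=True)
      # .label files store one uint32 per point, in point order.
      labels.astype(np.uint32).tofile(
          os.path.join(out_dir, "predictions", "%06d.label" % scan_id))
      # .conf files store one float32 per point, in point order.
      confidences.astype(np.float32).tofile(
          os.path.join(out_dir, "confidence", "%06d.conf" % scan_id))

After writing all scans, compress the predictions and confidence directories into a single result.zip for submission.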

 

Evaluation Criteria

This page describes how competition submissions will be evaluated and scored.

For 3D semantic segmentation tasks, we evaluate the outputs by:

  • accuracy: The accuracy, which is the proportion of observations classified correctly by your model, will be evaluated considering the ten categories as a whole.
  • IoU: For a given category, an observation belonging to it is considered positive, while all others are considered negative. TP (true positives) is the number of positive observations that the model correctly predicts as positive. FP (false positives) is the number of negative observations that the model incorrectly predicts as positive. FN (false negatives) is the number of positive observations that the model incorrectly predicts as negative.
    IoU (Intersection over Union) = TP / (TP + FP + FN)
    The IoU of each of the nine object categories is evaluated separately, and mIoU (mean IoU) is the average of the nine categories' IoU values (see the sketch after this list).
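
As an illustrative sketch (assuming per-point label arrays with the nine known classes indexed 0 to 8; this is not the official evaluation code), accuracy and mIoU can be computed as follows:

  import numpy as np

  def accuracy_and_miou(pred, gt, num_classes=9):
      # pred, gt: 1-D integer arrays of per-point predicted and ground-truth labels.
      acc = np.mean(pred == gt)
      ious = []
      for c in range(num_classes):
          tp = np.sum((pred == c) & (gt == c))  # positives predicted positive
          fp = np.sum((pred == c) & (gt != c))  # negatives predicted positive
          fn = np.sum((pred != c) & (gt == c))  # positives predicted negative
          denom = tp + fp + fn
          # Skip classes absent from both prediction and ground truth.
          ious.append(tp / denom if denom > 0 else np.nan)
      return acc, np.nanmean(ious)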

 

For OOD detection, we evaluate the confidence score by:

  • AUROC (area under the ROC curve): Given a confidence threshold, your model classifies each observation as either ID (positive) or OOD (negative). TPR (true positive rate) is the proportion of truly positive observations that are classified as positive. FPR (false positive rate) is the proportion of truly negative observations that are classified as positive. A ROC (Receiver Operating Characteristic) curve is created by plotting pairs of TPR vs. FPR for every possible decision threshold of the model.

We also provide an AUROC for each predicted class, which reflects the reliability of the model when it assigns input data to that specific class. This per-class AUROC is evaluated over all data predicted as the specific class. However, there are two special cases:

1. If none of the ID data is predicted as the specific class, the AUROC will be 0 (everything predicted as that class is OOD).

2. If none of the OOD data is predicted as the specific class, the AUROC will be 1 (everything predicted as that class is ID).


Confidence scores range from 0 to 1, and thresholds are uniformly sampled at intervals of 0.01. You should make your confidence scores as uniformly distributed over [0, 1] as possible for a more precise evaluation.
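
The following Python sketch illustrates this threshold sweep, including the two special cases above. It is only an illustration of the definitions; the official evaluation program is provided separately (see below):

  import numpy as np

  def auroc(confidence, is_id):
      # confidence: per-point scores in [0, 1].
      # is_id: boolean array, True for ID points, False for OOD points.
      # ID is treated as positive, OOD as negative.
      n_pos, n_neg = np.sum(is_id), np.sum(~is_id)
      if n_pos == 0:  # special case 1: no ID data, so AUROC is 0
          return 0.0
      if n_neg == 0:  # special case 2: no OOD data, so AUROC is 1
          return 1.0
      tpr, fpr = [], []
      for t in np.arange(0.0, 1.01, 0.01):  # thresholds at 0.01 intervals
          pred_pos = confidence >= t
          tpr.append(np.sum(pred_pos & is_id) / n_pos)
          fpr.append(np.sum(pred_pos & ~is_id) / n_neg)
      # Area under the ROC curve via the trapezoidal rule (sorted by FPR).
      order = np.argsort(fpr)
      t_arr, f_arr = np.array(tpr)[order], np.array(fpr)[order]
      return float(np.sum((f_arr[1:] - f_arr[:-1]) * (t_arr[1:] + t_arr[:-1]) / 2))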

In addition, we provide the evaluation program code to help you better understand the evaluation methods.

Terms and Conditions

License


Our dataset is based on the SemanticKITTI and SemanticPOSS datasets, and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes. Specifically, you should cite the following works:

  @inproceedings{behley2019arxiv,
      author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel 
                  and S. Behnke and C. Stachniss and J. Gall},
      title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
      booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
      year = {2019}}
  @inproceedings{pan2020semanticposs,
      author={Pan, Yancheng and Gao, Biao and Mei, Jilin and Geng, Sibo and Li, Chengkun and Zhao, Huijing},
      title={SemanticPOSS: A point cloud dataset with large quantity of dynamic instances},
      booktitle={2020 IEEE Intelligent Vehicles Symposium (IV)},
      pages={687--693},
      year={2020}}
  @inproceedings{geiger2012cvpr,
      author = {A. Geiger and P. Lenz and R. Urtasun},
      title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
      booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
      pages = {3354--3361},
      year = {2012}}

Leaderboard

  #  Username  Score
  1  gebreawe  0.735
  2  lidartp   0.727
  3  ppq       0.726