SemanticKITTI: Moving Object Segmentation

Organized by jbehley

Please note that we will not approve accounts with email addresses from free email providers, e.g., gmail.com, qq.com, web.de, etc. Only university or company email addresses will get access. See also our terms and conditions.

SemanticKITTI: Motion Segmentation

SemanticKITTI is a large-scale dataset providing point-wise labels for the LiDAR data of the KITTI Vision Benchmark. It is based on the odometry task data and provides annotations for 28 classes, including labels for moving and non-moving traffic participants. Please visit www.semantic-kitti.org for more information.

In this competition, one has to provide motion labels for each point of the test sequences 11-21. Therefore, the input to all evaluated methods is a list of coordinates of the three-dimensional points along with their remission, i.e., the strength of the reflected laser beam which depends on the properties of the surface that was hit. Each method should then output a label for each point of a scan, i.e., one full turn of the rotating LiDAR sensor. Here, we only distinguish between static and moving object classes.
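
For reference, each scan is stored as a flat binary file of 32-bit floats, four values (x, y, z, remission) per point, following the KITTI odometry layout. A minimal sketch for reading one scan with NumPy (the file path is a placeholder):

    import numpy as np

    # Each scan is a flat binary file of float32 values,
    # four per point: x, y, z, remission.
    scan = np.fromfile("sequences/11/velodyne/000000.bin", dtype=np.float32)
    scan = scan.reshape(-1, 4)   # shape (N, 4)
    points = scan[:, :3]         # 3D coordinates of the points
    remission = scan[:, 3]       # strength of the reflected laser beam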

Evaluation

Data Format

Similar to the training data, you have to provide a single zip file containing a folder "sequences". This folder contains the sub-folders "11", "12", ..., "21", each of which contains a folder "predictions". For each scan, the "predictions" folder must contain a label file in binary format that stores one unsigned 32-bit integer (the label) per point.

The contents of the zip-file should be organized like this:

    [description.txt]
    sequences
    ├── 11
    │   └── predictions
    │       ├── 000000.label
    │       ├── 000001.label
    │       └── ...
    ├── 12
    │   └── predictions
    │       ├── 000000.label
    │       ├── 000001.label
    │       └── ...
    ├── 13
    ⋮
    └── 21
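
A minimal sketch for writing one such label file with NumPy is shown below. The label IDs are an assumption: in the semantic-kitti-api configuration for this task, 9 is used for static and 251 for moving; please verify against the official API before submitting.

    import numpy as np

    # Hypothetical per-point predictions for one scan: 0 = static, 1 = moving.
    pred = np.zeros(125000, dtype=np.int64)   # placeholder predictions

    # Assumption: 9 = static, 251 = moving (check the semantic-kitti-api config).
    label_map = np.array([9, 251], dtype=np.uint32)
    labels = label_map[pred]

    # One unsigned 32-bit integer per point, written as a flat binary file.
    labels.tofile("sequences/11/predictions/000000.label")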

Optionally, you can have a text file "description.txt" with the following contents:

name:
pdf url:
code url:

The information gets parsed and is then shown in the "detailed results" of the submission. Here, "name" refers to the name of your proposed approach, "pdf url" is a link to the paper (if possible, provide a direct link to the PDF), and "code url" is a link to the code repository. All entries are optional, and we only parse this file if it is available.
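
For example, a filled-in description.txt could look like this (all values are hypothetical placeholders):

    name: MyMovingObjectNet
    pdf url: https://example.com/my-paper.pdf
    code url: https://github.com/example/my-moving-object-net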

It is strongly recommended to check your zip file with the validation script (validate_submission.py) of the semantic-kitti-api (available on GitHub) before uploading, since all submissions count towards the overall maximum number of submissions.

Note: The upload of the zip file with your results takes some time and there is (unfortunately) no indicator for the status of the upload. You will only see that your submission is being processed once the upload has completed.
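
To create the zip file itself, a sketch like the following could be used, assuming your predictions live in a local folder "my_submission" that contains the "sequences" folder (and, optionally, description.txt):

    import shutil

    # Packs the *contents* of my_submission into submission.zip, so that
    # "sequences" (and description.txt, if present) end up at the zip root.
    shutil.make_archive("submission", "zip", root_dir="my_submission")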

Evaluation Criterion

To assess the labeling performance, we rely on the commonly applied Jaccard index, i.e., the intersection-over-union (IoU), over the moving and non-moving parts of the environment, averaged over both classes (mIoU). For a class c, IoU_c = TP_c / (TP_c + FP_c + FN_c), where TP, FP, and FN denote the number of true positive, false positive, and false negative points, respectively.

We map all moving-x classes of the original SemanticKITTI semantic segmentation benchmark to a single moving object class.
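
As an illustration of the criterion (a simplified sketch; the official implementation is the evaluation script in the semantic-kitti-api), the per-class IoU and the mIoU could be computed from per-point class indices like this:

    import numpy as np

    def iou_per_class(pred, gt, num_classes=2):
        """IoU for each class; pred and gt hold per-point class
        indices (here 0 = static, 1 = moving)."""
        ious = []
        for c in range(num_classes):
            tp = np.sum((pred == c) & (gt == c))   # true positives
            fp = np.sum((pred == c) & (gt != c))   # false positives
            fn = np.sum((pred != c) & (gt == c))   # false negatives
            denom = tp + fp + fn
            ious.append(tp / denom if denom > 0 else float("nan"))
        return ious

    # Example with toy labels:
    gt   = np.array([0, 0, 1, 1, 1])
    pred = np.array([0, 1, 1, 1, 0])
    miou = np.nanmean(iou_per_class(pred, gt))   # mean over both classes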

Terms and Conditions

Submission Policy

Only the training set is provided for learning the parameters of the algorithms. The test set should be used only for reporting the final results compared to other approaches - it must not be used in any way to train or tune systems, for example, by evaluating multiple parameters or feature choices and reporting the best results obtained. Thus, we impose an upper limit (currently 10 attempts) on the number of submissions. It is the participant's responsibility to divide the training set into proper training and validation splits, e.g., we use sequence 08 for validation. The tuned algorithms should then be run - ideally - only once on the test data and the results of the test set should not be used to adapt the approach.

The evaluation server may not be used for parameter tuning since we rely here on a shared resource that is provided by the Codalab team and its sponsors. We ask each participant to upload the final results of their algorithm/paper submission only once to the server and perform all other experiments on the validation set. If participants would like to report results in their papers for multiple versions of their algorithm (e.g., parameters or features), this must be done on the validation data and only the best performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.


Important note: It is NOT allowed to register multiple times to the server using different email addresses. We are actively monitoring submissions and will revoke access and delete submissions that violate this policy. When registering with Codalab, we ask all participants to use a unique institutional email address (e.g., .edu) or company email address. We will not approve email addresses from free email services anymore (e.g., gmail.com, hotmail.com, qq.com). If you need to use such an email address, please contact us to get your account approved.

License and Citations


Our dataset is based on the KITTI Vision Benchmark, and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you have to give appropriate credit and may not use the work for commercial purposes.

Specifically, you should cite one or both of the following papers:

  @article{chen2021ral,
      author  = {X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall
                 and J. Behley and C. Stachniss},
      title   = {{Moving Object Segmentation in 3D LiDAR Data: A Learning-based
                 Approach Exploiting Sequential Data}},
      journal = {IEEE Robotics and Automation Letters (RA-L)},
      year    = {2021},
      doi     = {10.1109/LRA.2021.3093567}}

  @inproceedings{behley2019iccv,
      author    = {J. Behley and M. Garbade and A. Milioto and J. Quenzel
                   and S. Behnke and C. Stachniss and J. Gall},
      title     = {{SemanticKITTI: A Dataset for Semantic Scene Understanding
                   of LiDAR Sequences}},
      booktitle = {Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV)},
      year      = {2019}}

Please also cite the original KITTI Vision Benchmark:

  @inproceedings{geiger2012cvpr,
      author    = {A. Geiger and P. Lenz and R. Urtasun},
      title     = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
      booktitle = {Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
      pages     = {3354--3361},
      year      = {2012}}

For more information, please visit our website at http://www.semantic-kitti.org/.

How to Participate

Before you can submit your first results, you need to register with CodaLab and log in to participate. Only then can you submit results to the evaluation server, which will score your submission on the non-public test set.

Steps

  1. Prepare your submission in the required format, as described in the Evaluation section. CodaLab expects you to upload a single zip file.
  2. Use the validation script from the semantic-kitti-api to ensure that the folder structure and the number of label files in the zip file are correct. All submissions count towards the overall maximum number of submissions!
  3. Go to the Participate tab and open the Submit / View Results page.
  4. Fill in the required fields; you can also supply more details later, e.g., if you need to preserve anonymity for a double-blind submission.
  5. Click "Submit" in the lower part of the page, which opens a file dialog, and select your submission zip file there to upload it.
    Important: Don't close the window or tab until you see that a row has been added to the table below the "Submit" button.
  6. The evaluation takes roughly 10 minutes to complete, and you can choose which of your submissions gets added to the leaderboard.

Good luck with your submission!

Final Phase

Start: Feb. 1, 2021, midnight UTC

Description: Uploading your results takes some time; do not close the window before you see the status of your submission. You can find the old leaderboard at https://competitions.codalab.org/competitions/28894#results

Competition Ends

Never

Leaderboard

  #  Username  Score
  1  CS-4D     83.5
  2  stm-v1    81.5
  3  jxLiang   80.6