STPLS3D Instance Segmentation

Organized by Meida_Chen

Current phase: Instance Segmentation
Start: April 12, 2022, midnight UTC
Competition ends: Never
Please note that we will not approve accounts registered with addresses from free email providers, e.g., gmail.com, qq.com, web.de. Only university or company email addresses will be granted access; please refer to the Terms and Conditions. In addition, submissions without method descriptions will not be considered in the competition and will not be eligible for the prize award; please refer to the Evaluation section.

STPLS3D Instance Segmentation Challenge @ ICCV2023: The 3rd Challenge on Large-Scale Point Clouds Analysis for Urban Scenes Understanding

Co-located with ICCV 2023

This competition aims to explore how to achieve instance segmentation of urban-scale 3D point clouds. Specifically, the STPLS3D benchmark provides synthetic urban-scale point clouds with high-quality instance annotations. The semantic categories, spatial scale, and geometric topology of this dataset differ considerably from existing indoor datasets such as S3DIS and ScanNet. Can existing techniques be successfully scaled to this dataset? What are the main challenges of instance segmentation for urban-scale point clouds?

STPLS3D is a large-scale photogrammetry 3D dataset composed of high-quality, richly annotated point clouds from real-world and synthetic environments. The dataset can be used for benchmarking both semantic and instance segmentation algorithms. The synthetic dataset covers about 16 km² of city landscape, with up to 18 fine-grained semantic classes and 14 instance classes. Note that only the synthetic data v3 is used for training and testing in this competition. Please refer to our project page, GitHub, and paper for more details.

For fairness, all participants may only use the released STPLS3D v3 dataset to train their networks. Pretraining the models on any other public or private datasets is not allowed. If the unlabelled testing split is used during training, the participant should clearly state the experimental settings in their submission.

Other users who do not participate in the challenge are free to use our dataset, in combination with others, for their own research purposes.

We are thankful to USC-ICT for sponsoring the following prizes. The prize awards will be granted to the top 3 individuals and teams on the leaderboard that provide a valid submission.

  • 1st Place: $1,500 USD
  • 2nd Place: $1,000 USD
  • 3rd Place: $500 USD


If you find our work useful in your research, please consider citing:

@article{chen2022stpls3d,
  title={STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset},
  author={Chen, Meida and Hu, Qingyong and Hugues, Thomas and Feng, Andrew and Hou, Yu and McCullough, Kyle and Soibelman, Lucio},
  journal={arXiv preprint arXiv:2203.09065},
  year={2022}
}

Evaluation

Training and Testing Data

Training point clouds can be downloaded from here.

Testing scenes can be downloaded from here.

We also provide pre-processed testing files in .pth format, which split the original testing scenes into 50 m x 50 m blocks; they can be downloaded here. Note that the submission should correspond to the 300 blocks, not the 3 testing scenes.
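Before preparing predictions, it can help to inspect one of the pre-processed blocks. The snippet below is a minimal sketch: the file path is illustrative, and the tuple layout (per-point coordinates first, then colors, as in HAIS-style preprocessing) is an assumption you should verify by inspecting the loaded object.

    import torch

    # Illustrative path to one pre-processed 50m x 50m test block.
    block_path = "test_blocks/26_points_GTv3_00_inst_nostuff.pth"

    data = torch.load(block_path)

    # Assumed HAIS-style layout (coords, colors, ...); verify against your files.
    coords, colors = data[0], data[1]
    print("points in block:", len(coords))

The point order inside each block matters: the predicted binary mask files described below must contain one 0/1 value per point, in exactly this order.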

For simplicity, we provide our baseline implementation of HAIS here. You can follow the instructions under "4.2 Instance segmentation" to prepare the submission.

An example of the submission can be downloaded here.

Submission Format

The submission format is similar to that of the ScanNet 3D Semantic Instance Prediction benchmark.

You have to provide a single zip file containing 300 .txt prediction files, one for each 50 m x 50 m block of the test scenes, and a subfolder containing the predicted binary mask files.

Each prediction file (e.g., 26_points_GTv3_00_inst_nostuff.txt) should contain a list of instances, where an instance is described by: (1) the relative path to the predicted binary mask file, (2) the integer semantic label, and (3) the float confidence score. Each line in the prediction file should correspond to one instance, with the three values separated by spaces. Consequently, the filenames referenced in the prediction files must not contain spaces. The prediction files must be named according to the corresponding test blocks; please use our pre-processed .pth files as a reference.

The contents of the zip file should be organized like this:

    zip
    ├── 26_points_GTv3_00_inst_nostuff.txt
    ├── 26_points_GTv3_01_inst_nostuff.txt
    ...
    ├── 27_points_GTv3_00_inst_nostuff.txt
    ├── 27_points_GTv3_01_inst_nostuff.txt
    ...
    ├── 28_points_GTv3_098_inst_nostuff.txt
    ├── 28_points_GTv3_099_inst_nostuff.txt
    ...
    └── predicted_masks
        ├── 26_points_GTv3_00_inst_nostuff_000.txt
        ├── 26_points_GTv3_00_inst_nostuff_001.txt
        ...
        ├── 28_points_GTv3_099_inst_nostuff_013.txt
        └── 28_points_GTv3_099_inst_nostuff_014.txt

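Once all prediction files and mask files are written (see the sketch after the mask example below), the zip can be created with Python's standard library. The folder name "submission" is an assumption; root_dir ensures its contents sit at the top level of the archive, as required:

    import shutil

    # "submission" is a local folder laid out exactly as shown above.
    shutil.make_archive("stpls3d_submission", "zip", root_dir="submission")
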
Here is an example of the prediction file 26_points_GTv3_00_inst_nostuff.txt, which references the binary instance masks 26_points_GTv3_00_inst_nostuff_000.txt and 26_points_GTv3_00_inst_nostuff_001.txt under a subfolder named predicted_masks:

predicted_masks/26_points_GTv3_00_inst_nostuff_000.txt 4 0.8188
predicted_masks/26_points_GTv3_00_inst_nostuff_001.txt 4 0.1364
...

and predicted_masks/26_points_GTv3_00_inst_nostuff_000.txt could look like this (one 0/1 value per point, in the same order as the points in the block):

0
0
0
1
1
...
1
0
0

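As a minimal sketch of how such files can be produced, the helper below assumes that, for one block, you already have per-point instance assignments plus one semantic label and one confidence score per instance. All names and array layouts are illustrative and not part of the official toolkit:

    import os
    import numpy as np

    def write_block_predictions(block_name, point_instance_ids,
                                instance_labels, instance_scores,
                                out_dir="submission"):
        """Write one block's prediction .txt and its binary mask files.

        point_instance_ids: (N,) int array; -1 for points in no instance,
                            otherwise an index k into the two arrays below.
        instance_labels:    (K,) int array of semantic labels per instance.
        instance_scores:    (K,) float array of confidences per instance.
        """
        mask_dir = os.path.join(out_dir, "predicted_masks")
        os.makedirs(mask_dir, exist_ok=True)

        lines = []
        for k in range(len(instance_labels)):
            mask = (point_instance_ids == k).astype(np.int32)
            mask_name = f"{block_name}_{k:03d}.txt"
            # One 0/1 value per line, in the same point order as the block.
            np.savetxt(os.path.join(mask_dir, mask_name), mask, fmt="%d")
            lines.append(f"predicted_masks/{mask_name} "
                         f"{instance_labels[k]} {instance_scores[k]:.4f}")

        with open(os.path.join(out_dir, f"{block_name}.txt"), "w") as f:
            f.write("\n".join(lines) + "\n")

Calling this once per block, e.g. write_block_predictions("26_points_GTv3_00_inst_nostuff", ...), reproduces the layout shown above.
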
Please include a description.txt file with the following content:

    method name:
    method description:
    project url:
    publication url:
    bibtex:
    organization or affiliation:
    email:

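For illustration, a filled-in description.txt might look like this (all values are hypothetical placeholders):

    method name: HAIS baseline
    method description: HAIS trained from scratch on the STPLS3D v3 synthetic training split, using 50m x 50m blocks with rotation and flipping augmentation.
    project url: https://github.com/your-org/your-repo
    publication url: N/A
    bibtex: N/A
    organization or affiliation: Example University
    email: participant@example.edu
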
Important: Submitting the description file is required to get your result evaluated by our server. Submissions without method descriptions will not be considered in the final competition, will not be eligible for the prize award, and will be removed from the leaderboard. If the approach has been previously published, please include the publication URL, a detailed description of any improvements made, and the parameters used in this competition. Please also describe any data augmentation techniques used, as well as challenges and issues you faced.

Note: Uploading the zip file with your results takes some time, and there is (unfortunately) no indicator of the upload status. You will simply see that the submission is being processed once your data has been uploaded successfully.

Evaluation Criterion

To assess the segmentation performance, we rely on the commonly applied mean average precision (mAP).
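
For intuition, the sketch below shows a simplified computation of average precision at a single IoU threshold for one semantic class, matching predictions to ground-truth instances greedily by confidence. The server's actual evaluation is more involved, so treat this only as an illustration of the underlying metric, not as the official evaluation script:

    import numpy as np

    def average_precision(pred_masks, pred_scores, gt_masks, iou_thresh=0.5):
        """Simplified AP at one IoU threshold for one semantic class.

        pred_masks:  list of (N,) boolean arrays, one per predicted instance.
        pred_scores: list of floats, one confidence per prediction.
        gt_masks:    list of (N,) boolean arrays, one per ground-truth instance.
        """
        order = np.argsort(pred_scores)[::-1]  # highest confidence first
        matched = [False] * len(gt_masks)
        tp, fp = [], []
        for i in order:
            best_iou, best_j = 0.0, -1
            for j, gm in enumerate(gt_masks):
                if matched[j]:
                    continue
                inter = np.logical_and(pred_masks[i], gm).sum()
                union = np.logical_or(pred_masks[i], gm).sum()
                iou = inter / union if union else 0.0
                if iou > best_iou:
                    best_iou, best_j = iou, j
            if best_iou >= iou_thresh:
                matched[best_j] = True
                tp.append(1); fp.append(0)
            else:
                tp.append(0); fp.append(1)
        tp, fp = np.cumsum(tp), np.cumsum(fp)
        recall = tp / max(len(gt_masks), 1)
        precision = tp / np.maximum(tp + fp, 1)
        # Step-wise integration of the precision-recall curve.
        ap, prev_r = 0.0, 0.0
        for p, r in zip(precision, recall):
            ap += p * (r - prev_r)
            prev_r = r
        return float(ap)

mAP then averages this quantity over semantic classes (and, in ScanNet-style protocols, over a range of IoU thresholds).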

Terms and Conditions

Submission Policy

Only the training set is provided for learning the parameters of the algorithms. The testing set should be used only for reporting final results in comparison to other approaches; it must not be used in any way to train or tune systems, for example by evaluating multiple parameter or feature choices and reporting the best results obtained. We therefore impose an upper limit (currently 5 attempts) on the number of submissions. It is the participant's responsibility to divide the training set into proper training and validation splits. The tuned algorithms should then ideally be run only once on the test data, and the results on the test set should not be used to adapt the approach.

The evaluation server may not be used for parameter tuning, since we rely on a shared resource provided by the CodaLab team and its sponsors. We ask each participant to upload the final results of their algorithm/paper only once to the server and to perform all other experiments on the validation set. If participants would like to report results for multiple versions of their algorithm (e.g., different parameters or features) in their papers, this must be done on the validation data, and only the best-performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.


Important note: It is NOT allowed to register multiple times on the server using different email addresses. We actively monitor submissions, and we will revoke access and delete submissions of offenders. When registering with CodaLab, we ask all participants to use a unique institutional email address (e.g., .edu) or company email address. We will no longer approve email addresses from free email services (e.g., gmail.com, hotmail.com, qq.com). If you need to use such an email address, please contact us to approve your account.

License


The provided dataset is based on data released under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes.

Specifically, you should consider citing our work; the BibTeX entry is given above.

For more information, please visit our project page, GitHub, and paper.

How to Participate

Before you can submit your first results, you need to register with CodaLab and log in to participate. Only then can you submit results to the evaluation server, which will score your submission on the non-public test set.


Steps

  1. Prepare your submission in the required format, as described in the Evaluation section. CodaLab expects you to upload a single zip file.
  2. Go to Participate and then to the Submit / View Results page.
  3. Fill in the required fields; you can also supply more details later.
  4. Click "Submit" in the lower part of the page, which will open a file dialog. In the file dialog, select your submission zip file, which will then be uploaded.
    Important: Don't close the window or tab until you see that a row has been added to the table under the "Submit" button.
  5. The evaluation takes roughly 10 minutes to complete, and you can then choose which of your submissions gets added to the leaderboard.

Good luck with your submission!

Qingyong Hu, University of Oxford

Meida Chen, University of Southern California - Institute for Creative Technologies

Tai-Ying Cheng, University of Oxford

Sheikh Khalid, Sensat

Bo Yang, The Hong Kong Polytechnic University

Ronald Clark, Imperial College London

Yulan Guo, National University of Defense Technology

Ales Leonardis, University of Birmingham

Niki Trigoni, University of Oxford

Andrew Markham, University of Oxford

Important Dates

  • 04/01/2023: Competition starts
  • 09/24/2023: Competition ends
  • 09/29/2023: Decision to participants
  • 10/02/2023: Workshop (half-day)

Please contact Meida Chen at mechen@ict.usc.edu if you have any questions.

Leaderboard

  # Username          Score
  1 USTC-IAT-United   0.6940
  2 lucye             0.6879
  3 Subury            0.6612