MOE is a large-scale LiDAR dataset containing point-wise motion labels. It focuses on the motion labels of moving objects and provides both indoor and outdoor sequences with a high density of moving objects. For more information, please refer to the project page, The MOE Dataset. In this competition, participants are required to submit a predicted motion label for each point of the test sequences 05-09. That is, the input to your detection algorithm is a set of 3D point coordinates, and your algorithm should output a motion label for each point indicating whether it is moving (0 for non-moving, 1 for moving).
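As a minimal sketch of this input/output contract, the snippet below assumes KITTI-style binary scans (float32 x, y, z, intensity per point) and one integer label per line in the output file; `predict_motion_labels` is a hypothetical placeholder, not a provided baseline:

```python
import numpy as np

def predict_motion_labels(points: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: takes an (N, 3) array of x, y, z
    # coordinates and returns an (N,) array of 0/1 motion labels.
    # A real submission would run a trained model here; this one
    # simply marks every point as non-moving.
    return np.zeros(len(points), dtype=np.int32)

# Assumption: scans are stored KITTI-style as float32 (x, y, z, intensity);
# adjust the loader if MOE ships a different point-cloud format.
scan = np.fromfile("sequences/05/velodyne/000000.bin", dtype=np.float32)
points = scan.reshape(-1, 4)[:, :3]

labels = predict_motion_labels(points)
np.savetxt("05/predictions/000000.txt", labels, fmt="%d")  # one label per line
```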
Submissions must follow the structure of sample_submission.zip:

```
sample_submission.zip
├── 05
│   └── predictions
│       ├── 000000.txt
│       ├── 000001.txt
│       └── ...
├── 06
│   └── predictions
│       ├── 000000.txt
│       ├── 000001.txt
│       └── ...
├── 07
├── ...
└── 09
```

Evaluation Metric

To assess the labeling performance, we rely on the commonly applied Jaccard index, i.e., the intersection-over-union (IoU) metric, computed over moving points on all test sequences.
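The sketch below shows this metric under the assumption that intersections and unions are accumulated over all test scans before dividing (how the server aggregates across scans is not stated here; the per-class Jaccard index itself is standard):

```python
import numpy as np

def moving_iou(label_pairs) -> float:
    # Jaccard index (IoU) over the moving class (label 1), with
    # intersection and union accumulated across all scans rather
    # than averaged per scan -- an assumption about aggregation.
    intersection = 0
    union = 0
    for pred, gt in label_pairs:  # each an (N,) array of 0/1 labels
        pred_moving = pred == 1
        gt_moving = gt == 1
        intersection += np.logical_and(pred_moving, gt_moving).sum()
        union += np.logical_or(pred_moving, gt_moving).sum()
    return intersection / union if union > 0 else 1.0
```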
We adopt submission policies similar to those of SemanticKITTI: Moving Object Segmentation.
Only the training set is provided for learning the parameters of the algorithms. The test set should be used only for reporting final results in comparison to other approaches; it must not be used in any way to train or tune systems, for example by evaluating multiple parameter or feature choices and reporting the best results obtained. We therefore impose an upper limit (currently 10 attempts) on the number of submissions. It is the participant's responsibility to divide the training set into proper training and validation splits; e.g., we use sequence 04 for validation. The tuned algorithm should then ideally be run only once on the test data, and the results on the test set should not be used to adapt the approach.
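For concreteness, a possible split following the organizers' example (the exact set of training sequence IDs is an assumption here; only the 04 validation sequence and the 05-09 test sequences are stated above):

```python
# Hypothetical sequence split; adjust TRAIN_SEQUENCES to whatever
# sequences ship with the MOE training set.
TRAIN_SEQUENCES = ["00", "01", "02", "03"]
VAL_SEQUENCES = ["04"]  # validation sequence suggested above
TEST_SEQUENCES = ["05", "06", "07", "08", "09"]  # evaluated on the server only
```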
The evaluation server may not be used for parameter tuning, since it relies on a shared resource provided by the Codalab team and its sponsors. We ask each participant to upload the final results of their algorithm/paper submission to the server only once and to perform all other experiments on the validation set. If participants would like to report results for multiple versions of their algorithm (e.g., different parameters or features) in their papers, this must be done on the validation data, and only the best-performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.
Important note:
- It is NOT allowed to register multiple times on the server using different email addresses.
- We are actively monitoring submissions, and we will revoke access and delete submissions of violators.
- When registering with Codalab, we ask all participants to use a unique institutional email address (e.g., .edu) or company email address.
- We will no longer approve email addresses from free email services (e.g., gmail.com, hotmail.com, qq.com).
Start: March 9, 2024, midnight
End: Never
# | Username | Score
---|---|---
1 | deepduke | 0.01