DAMON challenge: 3D body contact prediction from 2D images

Organized by shashank.tripathi123

Current phase: First phase (starts Feb. 15, 2025, midnight UTC)

Competition ends: May 30, 2025, 11:59 p.m. UTC

Welcome to the DAMON Body Contact Annotation Challenge!

Understanding how humans use physical contact to interact with the world is a key step toward human-centric artificial intelligence. While inferring 3D contact is crucial for modeling realistic and physically plausible human-object interactions, existing methods either focus on 2D, consider body joints rather than the surface, use coarse 3D body regions, or do not generalize to in-the-wild images. In this challenge, we wish to spur research toward dense 3D contact estimation from images and examine how well existing methods can infer dense vertex-level 3D contact on the full body surface from in-the-wild images. The publicly released DAMON dataset enables, for the first time, an in-the-wild benchmark for this task. Different from the parallel track, this track focuses on predicting all 3D contact regions on the human body given an image.

The winner will be invited to give a talk at our CVPR'25 RHOBIN Workshop. The RHOBIN challenge is split into seven separate tracks, each focusing on an aspect of reconstructing human-object interaction from monocular RGB images:

  1. 3D human reconstruction from monocular RGB images. (competition website)
  2. 6DoF object pose estimation from monocular RGB images. (competition website)
  3. Joint human and object reconstruction from monocular RGB images - template-based. (competition website)
  4. Joint human and object reconstruction from monocular RGB images - template-free. (competition website)
  5. Joint tracking of human and object from monocular RGB video. (competition website)
  6. Estimating body contacts from single RGB images. (You are here!)
  7. Estimating semantic contacts from single RGB images. (competition website)

About the data

The DAMON dataset is a collection of vertex-level 3D contact labels on the SMPL/SMPL-X mesh, paired with color images of people in unconstrained environments covering a wide diversity of human-scene and human-object interactions. Participants are allowed to train their methods on the DAMON training and validation sets, or on any other datasets EXCEPT the DAMON test set. We have mechanisms in place to detect overfitting to the test set, and any such submissions will be disqualified. Both direct contact estimation methods and methods that estimate contact by thresholding the geometric distance between the reconstructed human and the scene/objects are encouraged to participate. For convenience, we provide the following links to download the DAMON dataset:

By downloading the dataset, you agree to the DAMON dataset license.

Submission

It is NOT mandatory to submit a report for your method. However, we DO encourage you to fill in this form about the additional training data you used. Each participant is allowed a maximum of 5 submissions per day and 100 submissions in total. Participants must pack their results into a single pkl file named results.pkl and submit it as a zip file. The pkl data should be organized as follows:

{
    "image_id": { # image_id is the image name as in the DAMON test set npz. The evaluation will fail if it doesn't match.

        # results for generalized 3D contact prediction from 2D images
        "gen_contact_vids": [N], # binary contact prediction for each vertex, N is the number of vertices (=6890)
    },
    ...
}

Example submissions can be found here. Please follow the same image_id format as in the example submission.
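
For illustration, the snippet below packs dummy predictions into results.pkl and wraps them in a zip with the expected layout. The image IDs and prediction values are placeholders; a real submission must use the image names from the DAMON test-set npz.

import pickle
import zipfile

import numpy as np

# Map each test image ID to its per-vertex binary contact prediction.
# The ID below is a placeholder; it must match an image name in the
# DAMON test-set npz exactly, or evaluation will fail.
results = {
    "example_image_0001.jpg": {
        # 6890 binary labels, one per SMPL vertex (dummy values here).
        "gen_contact_vids": np.random.randint(0, 2, size=6890).astype(np.uint8),
    },
    # ... one entry per test image ...
}

# Serialize to results.pkl, then wrap it in a zip for upload.
with open("results.pkl", "wb") as f:
    pickle.dump(results, f)

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("results.pkl")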

Note: if your method outputs contacts on the SMPL-X mesh, please use the script provided in the DECO repository here to convert them to SMPL format. The competition only supports 6890 contact vertices as input, in the SMPL format.
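
Conceptually, the conversion transfers labels through a fixed SMPL-X-to-SMPL vertex correspondence. Below is a minimal sketch under the assumption of a hypothetical precomputed correspondence array (the file name and layout are our own placeholders, not part of the challenge release; prefer the official DECO script):

import numpy as np

# Hypothetical correspondence: for each of the 6890 SMPL vertices, the
# index of its corresponding SMPL-X vertex (SMPL-X has 10475 vertices).
smplx_to_smpl = np.load("smplx_to_smpl_correspondence.npy")  # shape (6890,)

def smplx_contacts_to_smpl(smplx_contacts: np.ndarray) -> np.ndarray:
    """Transfer binary contact labels from SMPL-X (10475,) to SMPL (6890,)."""
    assert smplx_contacts.shape[0] == 10475
    return smplx_contacts[smplx_to_smpl]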

Evaluation

For details about the evaluation metrics, please refer to the Evaluation Criteria section below.

Reference

More details about the DAMON dataset can be found in the DECO paper and website.

@InProceedings{tripathi2023deco,
    author    = {Tripathi, Shashank and Chatterjee, Agniv and Passy, Jean-Claude and Yi, Hongwei and Tzionas, Dimitrios and Black, Michael J.},
    title     = {{DECO}: Dense Estimation of {3D} Human-Scene Contact In The Wild},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {8001-8013}
}

Evaluation Criteria

The evaluation supports contact prediction for the 6890 SMPL vertices. To convert contacts on the SMPL-X vertices to SMPL, please use this script. The contact estimation performance is evaluated according to the following metrics:

1) Precision: the percentage of correctly predicted contacts among all predicted contacts.

2) Recall: the percentage of correctly predicted contacts among all ground truth contacts.

3) F1 score: the harmonic mean of precision and recall.

4) Geodesic error (in cm): for each vertex predicted to be in contact, the shortest geodesic distance to a ground-truth contact vertex. If the predicted vertex is a true positive, this distance is zero; if not, it indicates the amount of prediction error along the body surface.
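
As a rough illustration (not the official evaluation code), the four metrics can be computed from binary per-vertex predictions as follows. The geodesic-distance matrix is an assumed, precomputed input and is not part of the challenge release.

import numpy as np

def contact_metrics(pred, gt, geo_dist=None):
    """Precision, recall, F1, and (optionally) mean geodesic error.

    pred, gt: binary arrays of shape (6890,) over the SMPL vertices.
    geo_dist: optional precomputed (6890, 6890) matrix of pairwise
              geodesic distances in cm (an assumption, not an official input).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    tp = np.sum(pred & gt)                 # true-positive contact vertices
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)

    geo_err = None
    if geo_dist is not None and pred.any() and gt.any():
        # For each predicted contact vertex, the shortest geodesic distance
        # to any ground-truth contact vertex (zero for true positives).
        geo_err = geo_dist[np.ix_(pred, gt)].min(axis=1).mean()

    return precision, recall, f1, geo_err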

Ranking

Although the main purpose of the challenge is not to rank algorithms, standardizing rankings is preferable to having multiple inconsistent criteria across papers.

We will invite one winner to give a talk at the workshop.

We will use the F1 score and geodesic error for the final ranking. To merge the two metrics into one scalar, we average each method's rank on the two metrics. For example, a method ranked 1st on one metric and 3rd on the other receives an average rank of 2.
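
To make the merging rule concrete, here is a small worked example with hypothetical scores (not real submissions):

# Hypothetical per-metric results for three methods.
f1_scores = {"A": 0.70, "B": 0.65, "C": 0.60}   # higher is better
geo_errors = {"A": 12.0, "B": 9.0, "C": 10.0}   # lower is better

def ranks(scores, higher_is_better):
    ordered = sorted(scores, key=scores.get, reverse=higher_is_better)
    return {name: i + 1 for i, name in enumerate(ordered)}

f1_rank = ranks(f1_scores, higher_is_better=True)     # A:1, B:2, C:3
geo_rank = ranks(geo_errors, higher_is_better=False)  # B:1, C:2, A:3

# Final ranking key: the mean of the two per-metric ranks.
avg_rank = {m: (f1_rank[m] + geo_rank[m]) / 2 for m in f1_scores}
print(avg_rank)  # {'A': 2.0, 'B': 1.5, 'C': 2.5}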

Evaluation Code

The evaluation code can be found in this GitHub repository.

Terms and Conditions

To participate in the competition, you must agree to the following terms and conditions: DAMON dataset license.

Leaderboard

#  Username              Score
1  shashank.tripathi123  65.29
2  luxiaoguo             62.45
3  xsix9                 62.45