The Semi-Supervised DAVIS Challenge on Video Object Segmentation @ CVPR 2020

Organized by scaelles

Test-dev starts: April 15, 2019, midnight UTC

Competition ends: Dec. 31, 2026, 11:59 p.m. UTC

Welcome to the 2020 Semi-Supervised DAVIS Challenge!

Update! Find the final test-challenge leaderboard results on the DAVIS website.

This is the submission site for the 2020 Semi-Supervised DAVIS Challenge on Video Object Segmentation. You can find more details about the challenge, dataset, prizes, and rules on the DAVIS website.


Please cite the following papers if you participate in the challenge:

@article{Caelles_arXiv_2019,
  author = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
  title = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
  journal = {arXiv:1905.00737},
  year = {2019}
}
@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}
@inproceedings{Perazzi2016,
  author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year = {2016}
}

Evaluation Criteria

In the semi-supervised task, methods are given the ground-truth segmentation mask of each object in the first frame. Although the semantic class of each object has recently been released, methods must not use it as additional information.

Results will be evaluated as the mean over all objects of two metrics: region similarity (J, the Jaccard index of the predicted and ground-truth masks) and contour accuracy (F, the F-measure of the mask boundaries). The final ranking metric is the mean of the two, J&F. Both measures were introduced in the original CVPR 2016 DAVIS paper.
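The two measures can be sketched as follows. This is a minimal NumPy illustration, not the official evaluation code: the DAVIS toolkit extracts boundaries morphologically and matches them with a distance tolerance, so its F scores will differ from this exact-pixel version.

```python
import numpy as np

def region_similarity(pred, gt):
    # J: intersection over union (Jaccard index) of two binary masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def boundary(mask):
    # Boundary pixels: mask pixels with at least one 4-neighbor outside
    # the mask (simplified; the official toolkit uses morphology).
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    core = m[1:-1, 1:-1]
    eroded = (core & m[:-2, 1:-1] & m[2:, 1:-1]
                   & m[1:-1, :-2] & m[1:-1, 2:])
    return core & ~eroded

def contour_accuracy(pred, gt):
    # F: F-measure between boundary pixel sets. Exact-pixel matching here;
    # the official metric allows a small distance tolerance.
    bp, bg = boundary(pred), boundary(gt)
    if bp.sum() == 0 and bg.sum() == 0:
        return 1.0
    hits = (bp & bg).sum()
    prec = hits / bp.sum() if bp.sum() else 0.0
    rec = hits / bg.sum() if bg.sum() else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Toy example: two overlapping 4x4 squares shifted by one column.
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), bool); gt[2:6, 3:7] = True
j = region_similarity(pred, gt)
f = contour_accuracy(pred, gt)
jf = (j + f) / 2  # final J&F ranking metric
```

In the toy example, the masks share 12 of 20 union pixels, giving J = 0.6; the per-object J and F are averaged over all objects and frames before the final J&F mean.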

Terms and Conditions

Please check the DAVIS website for details about terms and conditions.



Rank  Username    Score
1     wangchenxu  0.533
2     PGSmall     0.617
3     Jaejoon     0.309