Since the ground truth of the test data is public, some incredible results have appeared on the result list, and the list has lost its reference value.
Posted by: zach.duan @ Feb. 25, 2024, 1:59 p.m.

I deeply agree.
Posted by: sissuire @ Feb. 28, 2024, 1:27 p.m.

Hello, this is just a placeholder benchmark. I am afraid not everyone understands research. The idea was to reproduce/extend the benchmark from the paper here.
I remind everyone that the final benchmark and winners will be selected according to a private, unseen test set from our dxo colleagues, meaning that cheating is not possible.
For this reason, the final submission consists of the factsheet and the code+models used to produce the results.
- Organizers
Posted by: nanashi @ March 1, 2024, 4:33 p.m.

Hello, while using a private test set prevents cheating during evaluation, how will you ensure participants don't pre-train their models on the entire dataset prior to submission? Are participants allowed to train their models on the complete dataset before submitting?
Posted by: xqwang @ March 15, 2024, 1:40 a.m.