Dear DFGC participants,
The Dev-3 phase will begin soon, on April 27, 2022, at 1 a.m., and the 3rd update of the testing data is available.
As we approach the final stages of the competition, we would like to share the following IMPORTANT notes that must be followed to successfully complete the competition.
1. Reminder of the rules.
Please strictly follow the competition rules detailed in the “Overview” and “Terms and Conditions” pages. Any violation of these rules will lead to invalid competition results. Some notable rules include:
Only use publicly available datasets for training/development.
The competition datasets cannot be used for training/development.
Do not memorize identity, background, or any other confounder information as a shortcut specific to this competition dataset.
Hand-labeling the competition dataset is not allowed.
A special notice on the rule that self-created faceswap/deepfake data must be publicly released before it can be used (note: the deadline for such declarations has passed and none were made). We clarify that data-augmentation code that produces faceswap/deepfake data on-the-fly is not allowed in training, even if the code itself is publicly available, because the produced faceswap data are not public. More specifically, faceswap augmentations commonly refer to those that carefully align, splice, and blend the facial areas and assign the produced data the label 1 ("fake"), e.g., Face X-ray. Other normal data augmentations are allowed, e.g., regular masking-out, mixup, cutmix, etc.
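To illustrate the allowed kind of augmentation, here is a minimal sketch of mixup, which blends whole samples and labels rather than splicing facial regions. The function name and parameters are our own illustration, not part of any competition kit:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: convex combination of two samples and their labels.

    A generic augmentation of the allowed kind; unlike faceswap-style
    augmentations, it does not align/splice/blend facial areas.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # blend the inputs
    y = lam * y1 + (1.0 - lam) * y2       # blend the (soft) labels
    return x, y
```

The key distinction: mixup never constructs new fake faces to assign the label "fake"; it only interpolates between existing labeled samples.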
2. Docker environment for the final-phase testing.
To speed up the final-phase testing, where we will run the top-10 teams’ inference code on the private test set, we require that inference code be tested in the same docker environment we provide. The manual for using the docker is here (https://gitee.com/bob-peng/DFGC-22/blob/main/docker_manual.txt). We recommend that participants test with this docker during the Dev-3 phase. If the default libraries do not meet your requirements, you may install the minimum set of additional libraries on top of this docker and include the installation commands in your submitted code package.
3. Which model we test in the final phase.
ONLY your best-performing model on the Dev-3 LeaderBoard (LB) will be tested in the final phase, provided it ranks in the top 10 of the Dev-3 LB. After receiving your inference code, we will also run it on the Dev-3 public set locally and compare the result with your submitted one to rule out potential cheating. So be careful with every submission you make in Dev-3, as any of them may become the model tested in the final phase!
4. Procedures after the Dev-3 phase ends.
If your Dev-3 LB score is in the top 10, please email us (dfgc_2022 at 163.com) as soon as possible the zip package containing your inference code and model, which you should have tested in the docker referred to above. This must be done within 24 hours after Dev-3 ends, i.e., before May 8, 2022, 11:59 p.m. UTC. We will use the following days to first run the testing on the private set and then contact the potential top-3 teams to check their training code. The final ranking is determined by the score on the private set. If everything goes well, we will release the final ranking around May 16, 2022.
5. About training code checking
Participants are responsible for making their training code (and data) easy to check and their results precisely reproducible. We recommend minimizing random factors in your code as much as possible and using clear, fixed stopping criteria. In case randomness still exists, multiple runs may be conducted.
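As one common way of pinning down random factors, a sketch of a seed-fixing helper is shown below. It assumes a Python training stack; the NumPy and PyTorch lines are only illustrative of typical stacks and are skipped if those libraries are absent:

```python
import random

def set_seed(seed: int = 42) -> None:
    """Fix random seeds so repeated runs produce identical results.

    Always seeds Python's `random`; seeds NumPy and PyTorch only if
    they are installed (illustrative, not mandated by the competition).
    """
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # trade some speed for deterministic cuDNN kernels
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass
```

Calling `set_seed` once at the start of training, together with a fixed stopping criterion (e.g., a fixed number of epochs), makes the checking described above much easier.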
Best regards,
DFGC-2022