DeepFake Game Competition (DFGC) @ IJCB 2022 - Detection Track (NEW)

Organized by bob_peng
Reward $8,000


News

2022.6.13: The Top-3 teams are announced on the "Evaluation" page here.

2022.4.26: Important notices on the final phases here.

2022.4.26: The 3rd update of the testing data is available.

2022.4.11: The 2nd update of the testing data is available.

2022.3.29: The 1st update of the testing data is available.

Overview

DeepFake technology is developing fast, and realistic face-swaps are increasingly deceptive and hard to detect. Meanwhile, DeepFake detection methods are also improving. There is a two-party game between DeepFake creators and defenders. We are organizing this competition to provide a common platform for benchmarking the game between current state-of-the-art DeepFake creation and detection methods. The main research question to be answered by this competition is the current status of the two adversaries against each other. This is the second edition after DFGC-21 held last year, with many improvements. By organizing this competition, we hope to further stimulate research ideas for building better defenses against DeepFake threats.

This is the DeepFake detection track of DFGC-22. For the creation track, please see here.

Dataset

The test dataset for the DeepFake detection track will be drawn from our collected material videos (as real samples) and from the face swap videos created both by the creation track participants and by the organizers (as fake samples). These videos may undergo random post-processing operations to imitate real-world video quality degradation during transmission. For the final testing stage, additional real and fake videos from other sources may also be included for more data diversity.
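For illustration only, the kinds of post-processing degradations referred to above might resemble the sketch below. The organizers' actual operations and parameters are not disclosed; this is an assumed OpenCV-based example applied to a single video frame.

    # Illustrative sketch only: common transmission-style degradations
    # (downscaling, JPEG re-compression, Gaussian noise) applied to one frame.
    # The competition's real post-processing pipeline is not disclosed.
    import cv2
    import numpy as np

    def degrade_frame(frame, jpeg_quality=40, scale=0.5, noise_sigma=5.0):
        # Downscale and upscale back to simulate resolution loss.
        h, w = frame.shape[:2]
        small = cv2.resize(frame, (int(w * scale), int(h * scale)))
        frame = cv2.resize(small, (w, h))
        # Re-encode as low-quality JPEG to simulate compression artifacts.
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
        frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
        # Add mild Gaussian noise to simulate sensor/transmission noise.
        noise = np.random.normal(0, noise_sigma, frame.shape)
        return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)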

After registering for this track, you can download the dataset from "Participate >> Get Data".

Competition Protocols

The competition is composed of two adversarial tracks: DeepFake creation and DeepFake detection, which are conducted in parallel.

For the DeepFake detection track, participants are required to develop their detection methods and models using any publicly available datasets, except the provided competition datasets. The data collected and created in this competition CANNOT be used for training or developing detection models; they are only for testing. There are two stages in the detection track. The first is the validation stage, where models are tested on the public test set. The second is the final stage, where models are tested on the private test set. The public and private test sets have disjoint IDs; they absorb submissions from the creation track and will be updated after each evaluation round of the creation track. This process makes our competition a dynamic game between the two tracks.

During the validation stage, the public test set, which is composed of real and fake video clips without ground-truth labels, will be released to the detection track participants for producing inference results. The inference results, in the form of a text file indicating the predicted fake probability of each video clip, need to be submitted to the competition platform for evaluation, and the leaderboard (LB) will be automatically updated with each team's best score. After the validation stage ends, the top-10 teams on the LB will enter the final stage, where their models will be tested on the private set to obtain the final score. The final-stage models need to be packaged in a Docker container and sent to the organizers to complete the final-stage testing. Potential winning teams need to send their training code and training data, accompanied by a technical report, to the organizers so that reproducibility and compliance with the competition rules can be checked.
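As a rough sketch of producing such an inference-result file: the exact required format is defined on the "Participate" page, so the one-line-per-clip layout, the file name predictions.txt, and the placeholder score_video function below are assumptions for illustration only.

    # Hypothetical sketch: write one line per test clip with its predicted
    # fake probability. The exact file format required by the platform is
    # defined on the "Participate" page; this layout is an assumption.
    import os

    def score_video(path):
        # Placeholder for a real detector; should return a probability in [0, 1].
        return 0.5

    test_dir = "public_test_videos"  # assumed local folder of test clips
    with open("predictions.txt", "w") as f:
        for name in sorted(os.listdir(test_dir)):
            prob = score_video(os.path.join(test_dir, name))
            f.write(f"{name},{prob:.6f}\n")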

Timeline

3.15 - 5.7:  The validation phase. Up to 2 submissions can be made each day.

5.8 - 5.15: The final phase. Top-10 teams on the LB send their LB models to the organizers.

Awards

Top-3 winners in the final phase will receive award certificates and a bonus.
A total of 50,000 CNY (around 8,000 USD) in bonuses will be awarded to winners across the two tracks of DFGC-22.
The Top-3 of this track will receive 12,000 CNY, 8,000 CNY, and 5,000 CNY, respectively.

Organizing Committee

Chair
Prof. Jing Dong, IEEE Senior Member, from CASIA

Members
Dr. Bo Peng, from CASIA
Prof. Wei Wang, from CASIA
Prof. Zhen Lei, IAPR Young Biometrics Investigator Awardee, from CASIA
Prof. Zhenan Sun, IAPR Fellow, from CASIA
Prof. Siwei Lyu, IEEE Fellow, from University at Buffalo, SUNY

 

Sponsor

Tianjin Academy for Intelligent Recognition Technologies

Evaluation Methods

The evaluation of the detection track uses the area under the ROC curve (AUC), a classical metric for two-class classification problems.
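For reference, the same metric can be reproduced offline with scikit-learn, given ground-truth labels (1 = fake, 0 = real) and predicted fake probabilities. The arrays below are made-up examples, not competition data.

    # Minimal sketch of the evaluation metric: area under the ROC curve (AUC),
    # computed with scikit-learn. Labels and scores are made-up examples.
    from sklearn.metrics import roc_auc_score

    y_true = [1, 0, 1, 1, 0, 0]               # 1 = fake, 0 = real (example labels)
    y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.1]  # predicted fake probabilities

    print("AUC:", roc_auc_score(y_true, y_score))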

Final Results 

The Top-3 teams are:

HanChen, OPDAI, guanjz

Their results are listed below:

Team      Public   Private-1   Private-2   Private   Rank
HanChen   0.9521   0.9178      0.8955      0.9085    1
OPDAI     0.9297   0.8836      0.8511      0.8672    2
guanjz    0.9483   0.911       0.7461      0.8670    3

The final rank is determined by the performance on the private test set.

The private set is composed of two subsets: the private-1 with 2166 real and fake clips, and the private-2 with 895 real and fake clips. The private-1 was created using the same methods as the public set with different IDs, while the private-2 was collected from in-the-wild data.

The Top-13 teams of the Dev-3 phase were requested to send their inference code and Docker images to the organizers for the final-phase test, and 8 teams eventually responded. After checking the training code and training data of the top-3 teams, the results above were announced.

If you have further questions, please contact us at dfgc_2022 at 163.com

Terms

Each team should use only one Codalab account to register and submit in this competition. Please use the email address of your affiliation to enroll. After clicking register, please also send an email from the registering email account to dfgc_2022 at 163.com stating:
1. Your affiliation.
2. The real names of all team members.
3. The name of your adviser, if you are a student.
Otherwise, your enrollment in this competition may be refused.

Private sharing between different teams is not allowed.

Using multiple accounts is cheating and is forbidden.

By downloading the provided dataset, you agree to use this data only for non-profit research and educational purposes.

Potential winners agree to assist the organizers in checking the training and testing code for reproducibility and compliance with the rules.

Winners are encouraged to open-source their solutions, and they need to present a technical report at the IJCB competition workshop.

The organizing committee reserves the right to interpret and amend these terms, and the right to disqualify controversial results and teams. If you have any questions, please post them in the forum or send an email to dfgc_2022 at 163.com.

Important Rules

For the detection track, participants' self-created face-swap data is a special kind of data augmentation that is treated as a completely new DeepFake dataset in this competition, whose legitimacy depends on its public availability. To use this kind of data (augmentation) for training or developing models, it must be made publicly available before 2022.4.15 and be explicitly announced in the competition forum so that other participants are aware of the new dataset.

For the detection track, this competition focuses on the general scenario of detecting DeepFakes of unknown people in unknown backgrounds. Thus, the facial ID information of the competition videos cannot be memorized, explicitly or implicitly, in any form in the detection model or method. Similarly, background information such as the scene, clothing, or manually added watermarks cannot be used for detection either. Since our material videos of all 40 actors and the metadata are made public in the DeepFake creation track, we must forbid the potential misuse of this kind of confounding information in the detection track, which would make detection methods tailored only to this competition and useless elsewhere.

The dataset provided in this competition CANNOT be used for training or developing detection models; it is only for testing.

Do not hand-label the released testing dataset for submission or training.


Development-1

Start: March 15, 2022, 2 a.m.

Description: Development phase-1

Development-2

Start: April 12, 2022, 1 a.m.

Description: Development phase-2

Development-3

Start: April 27, 2022, 1 a.m.

Description: Development phase-3

Final

Start: May 7, 2022, 11:59 p.m.

Description: Final phase

Competition Ends

May 15, 2022, 11:59 p.m.
