In this thread we gather the clarifications requested by participating teams, to ensure that everybody is conveniently notified.
Posted by: FRCSyn @ Sept. 22, 2023, 11:18 a.m.
We want to clarify that there are no restrictions on the choice of face detection methods for use in the competition.
Posted by: FRCSyn @ Sept. 22, 2023, 11:19 a.m.
Only the provided datasets can be used for training (synthetic and/or real according to the sub-task). Generative models cannot be used to generate more data. However, traditional data augmentation techniques (e.g., flip, rotate) which do not involve generative models are allowed.
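As a purely illustrative sketch (not part of the official rules), such traditional, non-generative augmentation could look as follows, assuming PIL images; the library and the rotation range are only an example and are up to each team:

    # Minimal sketch of allowed (non-generative) augmentation, assuming PIL images.
    import random
    from PIL import Image, ImageOps

    def augment(img: Image.Image) -> Image.Image:
        # Horizontal flip with 50% probability (purely geometric, no generative model).
        if random.random() < 0.5:
            img = ImageOps.mirror(img)
        # Small random rotation in degrees, again a traditional transformation.
        return img.rotate(random.uniform(-10, 10))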
Posted by: FRCSyn @ Sept. 22, 2023, 11:21 a.m.
The DCFace distribution that can be used in the competition is the one with 0.5M images, as specified in the initial email sent to all participants.
Posted by: FRCSyn @ Sept. 22, 2023, 11:22 a.m.
For training, we specify the allowed datasets in each sub-task, and you are free to combine them in any manner you prefer. Even if not designed for face verification, FFHQ can be considered in the proposed challenge for several purposes, such as training a model for feature extraction and applying domain adaptation, among many other possibilities.
Posted by: FRCSyn @ Oct. 11, 2023, 1:31 p.m.
Only the datasets indicated for training in the specific sub-task can be used to compute the threshold of your face recognition system.
Posted by: FRCSyn @ Oct. 11, 2023, 1:31 p.m.
As stated in the 'Results' tab since the beginning of the FRCSyn Challenge:
"To determine the winners of sub-tasks 1.1 and 1.2 we consider Trade-off Accuracy, defined as the difference between the average and standard deviation of accuracy across demographic groups. To determine the winners of sub-tasks 2.1 and 2.2 we consider the average of verification accuracy across datasets".
If the order of results in the tables deviates from the above criteria, simply click on the relevant metric name (e.g., "Trade-off Accuracy (AVG - STD) [%]" in the first table) to sort the results accordingly.
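For clarity, the ranking metrics can be illustrated with a short sketch. This is not the official evaluation script; the use of the population standard deviation and the accuracy values below are assumptions made only for the example:

    import statistics

    def trade_off_accuracy(acc_per_group):
        # Sub-tasks 1.1 / 1.2: mean minus standard deviation of verification
        # accuracy (%) across demographic groups.
        return statistics.mean(acc_per_group) - statistics.pstdev(acc_per_group)

    def average_accuracy(acc_per_dataset):
        # Sub-tasks 2.1 / 2.2: mean verification accuracy (%) across datasets.
        return statistics.mean(acc_per_dataset)

    # Hypothetical accuracy values, for illustration only:
    print(trade_off_accuracy([92.1, 90.4, 88.7, 91.0]))  # AVG - STD
    print(average_accuracy([94.2, 89.5, 91.8]))          # AVG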
Posted by: FRCSyn @ Oct. 17, 2023, 4:13 p.m.
Only databases indicated for TRAINING in the specific sub-task can be used to compute the threshold of your face recognition system.
Please note that the comparison files provided for evaluation cannot be used to fix the threshold (this is why they do not contain any labels). Specific thresholds may be used for distinct groups, provided that they are established during the TRAINING process and that the assignment of the evaluated images to their respective groups is not determined based on the filename.
In case you decide to use multiple thresholds in a single system (e.g., for different ethnicities), we will ask you to provide a detailed explanation of the methodology employed to fix the thresholds.
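As a purely illustrative sketch (not a required procedure), one simple way to fix a single global threshold using ONLY comparison pairs built from the allowed training datasets is to pick the value that maximizes verification accuracy on those pairs; the function below assumes NumPy and genuine/impostor labels prepared by the participant:

    import numpy as np

    def fix_threshold(train_scores, train_labels):
        # train_scores: similarity scores for pairs built ONLY from the datasets
        # allowed for TRAINING in the given sub-task.
        # train_labels: 1 for genuine (same identity) pairs, 0 for impostor pairs.
        scores = np.asarray(train_scores, dtype=float)
        labels = np.asarray(train_labels, dtype=int)
        candidates = np.unique(scores)
        # Choose the threshold that maximizes verification accuracy on training pairs.
        accuracies = [np.mean((scores >= t).astype(int) == labels) for t in candidates]
        return candidates[int(np.argmax(accuracies))]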
Posted by: FRCSyn @ Oct. 23, 2023, 8:18 a.m.
Alternative datasets (e.g., LFW) are not considered for validation.
Participants are required to use data from the allowed training sets.