FACTIFY5WQA

Organized by Surya17


Overview

Dataset released as part of the De-Factify 3 workshop at AAAI-23.

Contemporary automatic fact-checking systems estimate truthfulness using numerical scores that are not human-interpretable. A human fact-checker, by contrast, generally follows several logical steps to verify a claim and conclude whether it is truthful or deceptive. It is therefore useful to have an aspect-based, explainable system (delineating which part(s) of a claim are true and which are false) that can assist human fact-checkers by asking relevant questions about a fact, each of which can then be validated separately to reach a final verdict. In this task, we release a 5W framework (who, what, when, where, and why) for question-answer-based fact explainability. To that end, we present a semi-automatically generated dataset called FACTIFY-5WQA.
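For illustration, a single 5W record in this framework could look like the sketch below. The field names and values here are assumptions made for the example; the authoritative schema ships with the released dataset itself.

```python
# A hypothetical FACTIFY-5WQA-style record, sketched from the task
# description above -- NOT the official dataset schema.
record = {
    "claim": "The mayor opened the new bridge on Monday.",
    "questions": {                      # 5W questions derived from the claim
        "who":   "Who opened the new bridge?",
        "what":  "What did the mayor open?",
        "when":  "When was the bridge opened?",
        "where": None,                  # not every claim yields all five Ws
        "why":   None,
    },
    "claim_answers":    {"who": "the mayor", "what": "the new bridge", "when": "on Monday"},
    "evidence_answers": {"who": "the mayor", "what": "the new bridge", "when": "on Monday"},
    "label": "Support",                 # one of Support / Neutral / Refute
}
```

Each question can be answered separately against the claim and the evidence, and the per-question answers together justify the final Support/Neutral/Refute verdict.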

Participation

Registered participants will be able to download the zipped dataset from the Participate > Get Data tab. Please ensure you fill out this form when registering for the shared task: Google Form. We thank you for your participation and wish you the best of luck! All teams/participants will be invited to submit a paper describing their system. Accepted papers will be published in formal proceedings.

For more information, or in case of issues/queries, please mail us at defactifyaaai@gmail.com

Competition Evaluation

Official Competition Metric: Since this task has three target variables, we use the average BLEU score of the answers to the 5W questions, computed for both the claim and the evidence. A prediction is counted as correct only if this average score exceeds a set threshold and the predicted label (Support/Neutral/Refute) is also correct. The final accuracy is the percentage of correct predictions.
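The scoring logic can be sketched as follows. This is a simplified stand-in, not the official scorer: the `bleu` function here is a minimal sentence-level BLEU with uniform weights and crude smoothing, and the `0.3` threshold is purely illustrative (the real threshold is set by the organizers).

```python
import math
from collections import Counter

def bleu(reference, hypothesis, max_n=4):
    """Minimal sentence-level BLEU (uniform n-gram weights, brevity penalty).
    A stand-in for whatever BLEU implementation the organizers actually use."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total if overlap else 1e-9)  # crude smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean

def is_right_prediction(pred, gold, bleu_threshold=0.3):
    """pred/gold: dicts with 'answers' (list of 5W answer strings) and 'label'.
    Correct only if avg answer BLEU clears the threshold AND the label matches."""
    avg_bleu = sum(bleu(g, p) for g, p in zip(gold["answers"], pred["answers"])) / len(gold["answers"])
    return avg_bleu > bleu_threshold and pred["label"] == gold["label"]

def accuracy(preds, golds, bleu_threshold=0.3):
    """Final metric: percentage of right predictions (here as a fraction)."""
    right = sum(is_right_prediction(p, g, bleu_threshold) for p, g in zip(preds, golds))
    return right / len(golds)
```

Note that a prediction with perfect answers but the wrong label still scores zero under this scheme, as does a correct label with answers below the BLEU threshold.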

Each participating team will initially have access to the training data and validation data only. Later, the unlabeled test data will be released. Please ensure you fill the form given in the overview section.

Results are to be submitted as a JSON file named "answer.json", uploaded in a zipped archive. Refer to the starting kit for more details.
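Packaging a submission can be done as below. The structure of the `predictions` dict here is hypothetical; the authoritative answer.json schema is documented in the starting kit. Only the required file name and zip packaging are taken from the rules above.

```python
import json
import zipfile

# Hypothetical predictions structure -- consult the starting kit for the
# real answer.json schema. Only the file name and zipping are prescribed.
predictions = {
    "0": {"answers": ["the mayor", "the new bridge", "on Monday"],
          "label": "Support"},
}

# Write answer.json, then place it inside a zip archive for upload.
with open("answer.json", "w") as f:
    json.dump(predictions, f)
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("answer.json")
```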

Terms and Conditions

By submitting results to this competition, you consent to the public release of your scores at the workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision on metric choice and score values rests with the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

By downloading the data or by accessing it in any manner, you agree not to redistribute it except for non-commercial academic research purposes. The data must not be used for surveillance, or for analyses or research that isolates a group of individuals or any single individual, for any unlawful or discriminatory purpose.

Competition Schedule

  • 13 October 2023: Release of the training set.
  • 8 November 2023: Release of the test set.
  • 30 November 2023: Deadline for submitting the final results.
  • 3 December 2023: Announcement of the results.
  • 10 December 2023: System paper submission deadline (all teams are invited to submit a paper).
  • 20 December 2023: Notification of acceptance for system papers.
  • 25 December 2023: Camera-ready submission deadline.

Validation

Start: Oct. 15, 2023, midnight UTC

Description: Validation phase: create models and submit results on the validation data; feedback is provided on the validation set only.

Test

Start: Nov. 10, 2023, midnight UTC

Description: Test phase: use the created models and submit results on the test data; feedback is provided on the test and validation sets.

Competition Ends

Never
