Task11 @SemEval2023: Learning With Disagreements (Le-Wi-Di), 2nd edition

Organized by elisa.leonardelli

Previous phase: Evaluation Phase (started Jan. 10, 2023, midnight UTC)

Current phase: Post-competition (started Jan. 31, 2023, 11:59 p.m. UTC)

Competition ends: Never
Download       Size (MB)   Phase
Public Data    0.007       #0 Pre-practice Phase
Starting Kit   0.003       #1 Practice Phase
Public Data    1.113       #1 Practice Phase
Public Data    0.340       #2 Evaluation Phase
Public Data    1.483       #3 Post-competition

Pre-practice Phase

Start: July 13, 2022, midnight

Description: In this initial phase, you are provided with a sample of the data for each dataset (see 'Get data sample' in the Overview section). Submissions on Codalab are not allowed. If you intend to participate, please also join our Google group to stay updated on any news about the task.
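For orientation, here is a rough, unofficial sketch of inspecting such a sample. The file name (data_sample.json) and the field names ("text", "annotations", "hard_label", "soft_label") are assumptions for illustration only; check the actual sample files and the Overview section for the real schema.

```python
# Rough illustration only: the file name and field names below are
# assumptions; consult the real sample files for the actual schema.
import json

with open("data_sample.json") as f:
    items = json.load(f)  # assumed: a dict keyed by item id

for item_id, item in list(items.items())[:3]:
    print(item_id, item.get("text"))
    print("  crowd annotations:", item.get("annotations"))
    print("  hard label:", item.get("hard_label"))
    print("  soft label:", item.get("soft_label"))
```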

Practice Phase

Start: Sept. 13, 2022, 7 a.m.

Description: This is the practice phase of the competition. You can participate on Codalab and access the "Participate" and "Results" sections. In this phase, you are provided with the training and validation parts of the datasets (the validation labels will be released in the next phase). You are expected to craft novel approaches for training models using the crowd labels. Submissions are unlimited and are evaluated on the validation data (cross-entropy and micro-F1). The leaderboard is public, so participants can see how their models perform and compare with others. Useful information about submissions is given in the "Data format and submission data format" section; the starting_kit contains code snippets (for loading and evaluating data) and an example of the submission files.
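As a rough, unofficial sketch of how the two reported metrics could be computed offline (the official scoring code ships with the starting_kit), assuming gold and predicted soft labels are probability distributions over the classes and hard labels are single class indices:

```python
# Minimal sketch (not the official starting_kit scorer): evaluating a set of
# predictions with the two metrics used on the leaderboard.
import numpy as np
from sklearn.metrics import f1_score

def cross_entropy(soft_true, soft_pred, eps=1e-9):
    """Average cross-entropy between gold soft-label distributions and
    predicted distributions (each row sums to 1)."""
    soft_true = np.asarray(soft_true, dtype=float)
    soft_pred = np.clip(np.asarray(soft_pred, dtype=float), eps, 1.0)
    return float(-np.mean(np.sum(soft_true * np.log(soft_pred), axis=1)))

def micro_f1(hard_true, hard_pred):
    """Micro-averaged F1 over the hard labels."""
    return float(f1_score(hard_true, hard_pred, average="micro"))

# Example with two binary-label items:
gold_soft = [[0.8, 0.2], [0.3, 0.7]]
pred_soft = [[0.7, 0.3], [0.4, 0.6]]
print(cross_entropy(gold_soft, pred_soft))
print(micro_f1([0, 1], [0, 1]))
```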

Evaluation Phase

Start: Jan. 10, 2023, midnight

Description: In this phase of the competition, the (unlabelled) test data is released. Participants are expected to use the models trained in the practice phase to make predictions on the test data and submit them. Submissions in this phase are limited to 5 per participant, to prevent fine-tuning of models on the test data. However, in this phase we release the labels for the development set to facilitate quick offline testing and refinement of models. More details about evaluation are given in the Evaluation section (Learn the Details tab). Please note that in the leaderboard, the "entries" number can differ from the number of valid submissions (at most 5), because it also counts failed submissions.
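The exact file names and layout required by the scorer are specified in the "Data format and submission data format" section and illustrated in the starting_kit; purely as an illustrative sketch, assuming one TSV line per test item containing a hard label followed by the soft-label probabilities, a submission archive could be assembled like this:

```python
# Illustrative sketch only: the real submission layout and file names are
# defined by the task organisers; here we assume a single TSV of
# "hard_label <tab> p_class0 <tab> p_class1 ..." zipped for upload.
import csv
import zipfile

def write_submission(predictions, tsv_path="predictions.tsv",
                     zip_path="submission.zip"):
    """predictions: list of (hard_label, [p_class0, p_class1, ...])."""
    with open(tsv_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for hard, soft in predictions:
            writer.writerow([hard, *soft])
    # Codalab submissions are uploaded as a zip archive.
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.write(tsv_path)

write_submission([(1, [0.2, 0.8]), (0, [0.9, 0.1])])
```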

Post-competition

Start: Jan. 31, 2023, 11:59 p.m.

Description: The official competition ends with the Evaluation phase. However, the Post-competition phase allows you to continue refining and testing your models.

Competition Ends

Never

Leaderboard

#   Username       Score
1   nasheedyasin   3.09
2   guneetsk99     5.38