WASSA 2023 Shared Task

Organized by wassa23codemixed

WASSA 2023 Shared Task on Multi-Label and Multi-Class Emotion Classification on Code-Mixed Text Messages

WASSA 2023: 13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis will be held in conjunction with ACL 2023 in Toronto, ON, Canada, July 14, 2023.

Cite this paper for the task: Ameer, I., Sidorov, G., Gomez-Adorno, H., & Nawab, R. M. A. (2022). Multi-label emotion classification on code-mixed text: Data and methods. IEEE Access, 10, 8779-8789.

@article{ameer2022multi,
  title={Multi-label emotion classification on code-mixed text: Data and methods},
  author={Ameer, Iqra and Sidorov, Grigori and Gomez-Adorno, Helena and Nawab, Rao Muhammad Adeel},
  journal={IEEE Access},
  volume={10},
  pages={8779--8789},
  year={2022},
  publisher={IEEE}
}

Join the official task mailing group: wassa23code-mixed@googlegroups.com

It is crucial that you join the mailing list to receive the latest news and updates. Note that even if you join the mailing list now, you will still be able to see all messages posted earlier.

Emotion is a concept that is challenging to describe. Yet, as humans, we understand the emotional effect that situations have, or could have, on other people and on us. How can we transfer this knowledge to machines? Is it possible to automatically learn the emotions that a code-mixed (Roman Urdu + English) text message triggers?

We propose the Shared Task on Multi-Label and Multi-Class Emotion Classification on Code-Mixed Text Messages, organized as part of WASSA 2023 at ACL 2023. This task aims to develop models that can predict emotion based on code-mixed (Roman Urdu and English) text messages.

Schedule

  • February 28th, 2023: Initial training data release
  • February 28th, 2023: Codalab competition website goes online, and development data released
  • April 15th, 2023: Evaluation phase begins: development labels and test data released
  • April 25th, 2023: Deadline for submission of final results on Codalab
  • May 1st, 2023: Deadline for system description papers (max. 4 pages)
  • May 22nd, 2023: Notification of acceptance
  • June 6th, 2023: Camera-ready papers due

Task Description

Track 1 - Multi-Label Emotion Classification (MLEC): Given a code-mixed SMS message, classify it as 'neutral or no emotion' or as one, or more, of eleven given emotions that best represent the mental state of the author.

Track 2 - Multi-class Emotion Classification (MCEC): Given a code-mixed SMS message, classify it as 'neutral or no emotion' or as one of eleven given emotions that best represent the mental state of the author.

You are free to participate in either track or in both. Further details on both tracks are provided below.
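To illustrate how the two tracks differ: a record in the multi-label track carries a set of emotion labels (an empty set meaning 'neutral or no emotion'), while a multi-class record carries exactly one label. A minimal sketch, with hypothetical messages and labels (the official data format may differ):

```python
# Hypothetical examples only; the official task data format may differ.
EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy", "love",
            "optimism", "pessimism", "sadness", "surprise", "trust"]

# Track 1 (MLEC): each message maps to a SET of emotions; an empty
# set stands for 'neutral or no emotion'.
mlec_example = {"text": "Yaar exam clear ho gaya, so happy!",
                "labels": {"joy", "optimism"}}

# Track 2 (MCEC): each message maps to exactly ONE class out of the
# eleven emotions plus 'neutral'.
mcec_example = {"text": "Kal milte hain office mein.",
                "label": "neutral"}

# A multi-label set is often encoded as an 11-dimensional binary vector:
vector = [1 if e in mlec_example["labels"] else 0 for e in EMOTIONS]
```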

Track MLEC: This is a Multi-Label Emotion Classification Task

Given:

    a code-mixed SMS message

Task: classify the SMS message as 'neutral or no emotion' or as one, or more, of eleven given emotions that best represent the mental state of the author:

  • anger (also includes annoyance and rage) can be inferred
  • anticipation (also includes interest and vigilance) can be inferred
  • disgust (also includes disinterest, dislike and loathing) can be inferred
  • fear (also includes apprehension, anxiety, concern, and terror) can be inferred
  • joy (also includes serenity and ecstasy) can be inferred
  • love (also includes affection) can be inferred
  • optimism (also includes hopefulness and confidence) can be inferred
  • pessimism (also includes cynicism and lack of confidence) can be inferred
  • sadness (also includes pensiveness and grief) can be inferred
  • surprise (also includes distraction and amazement) can be inferred
  • trust (also includes acceptance, liking, and admiration) can be inferred

Track MCEC: This is a Multi-Class Emotion Classification Task

Given:

    a code-mixed SMS message

Task: classify the SMS message as 'neutral or no emotion' or as one of eleven given emotions (given in Track 1) that best represent the mental state of the author.

Organizers of the shared task:

Iqra Ameer
Assistant Professor of Computer Science
Division of Engineering and Science (Abington), Penn State University
PA, USA
E-mail: iqa5148@psu.edu

Necva Bolucu
Postdoctoral Research Fellow
CSIRO
E-mail: Necva.Bolucu@csiro.au

Ali Al Bataineh
Assistant Professor
Electrical and Computer Engineering
Norwich University, USA
E-mail: aalbatai@norwich.edu

Hua Xu
Professor
Section of Biomedical Informatics and Data Science, School of Medicine
Yale University, USA
E-mail: hua.xu@yale.edu

Paper

Participants will be given the opportunity to write a system-description paper describing their system, the resources used, results, and analysis. This paper will be part of the official WASSA 2023 proceedings. The paper may be at most four pages long, plus at most two pages for references, and should be submitted via ACL Rolling Review using the ACL 2023 style files (LaTeX). The paper may contain an appendix.

Evaluation

For development purposes, we provide an evaluation script here. The script takes two or three files as input: a gold-standard file (such as the training gold standard) and one or two prediction files in the format described in 'Submission Format'.

Track 1 (MLEC):

Official Competition Metric: The evaluation will be based on multi-label accuracy (or Jaccard index).

Secondary Evaluation Metrics: Apart from the official competition metric described above, some additional metrics will also be calculated for your submissions. These are intended to provide a different perspective on the results:

  • Micro F1-score
  • Macro F1-score
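The official metric, multi-label accuracy (Jaccard index), averages the per-message overlap between gold and predicted label sets. A minimal pure-Python sketch of this metric and the two secondary F1 scores, following their standard definitions (the example labels and predictions below are hypothetical, not task data):

```python
# Sketch of the MLEC metrics under their standard definitions; the
# gold/pred examples are hypothetical, not official task data.
EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy", "love",
            "optimism", "pessimism", "sadness", "surprise", "trust"]

def jaccard_accuracy(gold, pred):
    """Multi-label accuracy: mean per-sample |G ∩ P| / |G ∪ P|.
    A sample with empty gold and empty prediction counts as 1.0."""
    total = 0.0
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        union = g | p
        total += 1.0 if not union else len(g & p) / len(union)
    return total / len(gold)

def f1_scores(gold, pred, labels):
    """Micro- and macro-averaged F1 over the label set."""
    tp = {l: 0 for l in labels}
    fp = {l: 0 for l in labels}
    fn = {l: 0 for l in labels}
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        for l in labels:
            if l in p and l in g:
                tp[l] += 1
            elif l in p:
                fp[l] += 1
            elif l in g:
                fn[l] += 1
    # Micro F1: pool true/false positive/negative counts across labels.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if (TP + FP + FN) else 0.0
    # Macro F1: average the per-label F1 scores.
    per_label = []
    for l in labels:
        denom = 2 * tp[l] + fp[l] + fn[l]
        per_label.append(2 * tp[l] / denom if denom else 0.0)
    macro = sum(per_label) / len(labels)
    return micro, macro

gold = [["joy", "love"], ["anger"], []]
pred = [["joy"], ["anger", "disgust"], []]
print(round(jaccard_accuracy(gold, pred), 3))  # → 0.667
```

Note that macro F1 averages over all eleven labels, so rare emotions weigh as much as frequent ones, whereas micro F1 is dominated by the frequent labels.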

Track 2 (MCEC):

Official Competition Metric: The evaluation will be based on Macro F1-score

Secondary Evaluation Metrics: Apart from the official competition metric described above, some additional metrics will also be calculated for your submissions. These are intended to provide a different perspective on the results:

  • Accuracy
  • Macro Precision
  • Macro Recall
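For the multi-class track, the metrics reduce to their standard single-label forms: accuracy is the fraction of exactly matched classes, and the macro scores average per-class precision, recall, and F1. A pure-Python sketch under those standard definitions (the label set shown is truncated and the predictions are hypothetical):

```python
# Sketch of the MCEC metrics under their standard multi-class
# definitions; the labels and predictions below are hypothetical.
def mcec_metrics(gold, pred, labels):
    """Accuracy, macro precision, macro recall, and macro F1 for
    exactly one predicted class per message."""
    assert len(gold) == len(pred)
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    precs, recs, f1s = [], [], []
    for l in labels:
        tp = sum(1 for g, p in zip(gold, pred) if p == l and g == l)
        fp = sum(1 for g, p in zip(gold, pred) if p == l and g != l)
        fn = sum(1 for g, p in zip(gold, pred) if p != l and g == l)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(labels)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

# Truncated label set for illustration; the task uses eleven
# emotions plus 'neutral'.
labels = ["joy", "anger", "neutral"]
gold = ["joy", "anger", "neutral", "joy"]
pred = ["joy", "neutral", "neutral", "anger"]
acc, mp, mr, mf1 = mcec_metrics(gold, pred, labels)
```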

Terms and Conditions

By participating in this task you agree to these terms and conditions. If, however, one or more of these conditions is a concern for you, send us an email and we will consider whether an exception can be made.

By submitting results to this competition, you consent to the public release of your scores at this website, at the WASSA 2023 website, the Codalab website and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers. You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

A participant can be involved in exactly one team (no more). If there are reasons why it makes sense for you to be on more than one team, then email us before the evaluation period begins. In special circumstances this may be allowed.

Each team must create and use exactly one CodaLab account.

Team constitution (members of a team) cannot be changed after the evaluation period has begun. No participant can be part of more than one team.

During the evaluation period:

  • Each team can make up to fifty submissions. However, only the final submission will be considered the official submission to the competition.
  • You will not be able to see the results of your submissions on the test set.
  • You will be able to see any warnings and errors for each of your submissions.
  • The leaderboard is disabled.
  • We will make the teams' final submissions public at some point after the evaluation period.
  • The organizers and their affiliated institutions make no warranties regarding the datasets provided, including but not limited to their correctness or completeness. They cannot be held liable for providing access to the datasets or for the usage of the datasets.
  • The dataset should only be used for scientific or research purposes. Any other use is explicitly prohibited.
  • The datasets must not be redistributed or shared in part or full with any third party. Redirect interested parties to this website.
  • If you use any of the datasets provided in the shared task, you agree to cite the associated paper. Information will be provided later.

Development

Start: Feb. 28, 2023, midnight

Description: Development phase: create models and submit them, or directly submit results on the validation and/or test data; feedback is provided on the validation set only.

Final

Start: April 15, 2023, midnight

Description: Final phase: submissions from the previous phase are automatically cloned and used to compute the final score. The results on the test set will be revealed when the organizers make them available.

Competition Ends

Never
