Named Entity Oriented Sentiment Analysis Task (RuSentNE-2023)

Organized by nicolay-r


Welcome!

We invite you to participate in the Dialogue 2023 shared task on Targeted Sentiment Analysis for the Russian Language — RuSentNE. It is the first competition for targeted sentiment analysis towards named entities in Russian news texts. Named entities should be classified into three sentiment classes — positive, negative, or neutral — within a single sentence.
The specific features of news texts are as follows:
  • news texts contain numerous named entities with neutral sentiment, which means that the neutral class largely dominates;
  • on the other hand, some sentences contain several named entities with different sentiments, which makes it difficult to determine the sentiment towards a specific named entity.

Task setting

A list of sentences from mass-media news texts is given (example). Each sentence is annotated with:

  • entity — the object of sentiment analysis
  • entity_tag — tag of this object (PERSON, ORGANIZATION, PROFESSION, COUNTRY, NATIONALITY)
  • entity_pos_start_rel — index of the first character of the given entity
  • entity_pos_end_rel — index of the character following the last character of the given entity
  • label — sentiment label

Each entity has a label on a three-point scale. The following classes (labels) are used:

  • Negative (-1)
  • Neutral (0)
  • Positive (1)

Participants are required to automatically annotate each test sentence with a sentiment label for the given entity.
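The annotation can be read with the standard csv module, and the slice [entity_pos_start_rel:entity_pos_end_rel] should then recover the entity string. A minimal sketch — the sample row, column order, and tab delimiter are assumptions, so check them against the actual data files:

```python
import csv
import io

# Hypothetical sample row; the real data uses the fields listed above,
# but the exact delimiter and column order may differ.
sample = io.StringIO(
    "sentence\tentity\tentity_tag\tentity_pos_start_rel\tentity_pos_end_rel\tlabel\n"
    "The president praised Acme Corp for its results.\tAcme Corp\tORGANIZATION\t22\t31\t1\n"
)

for row in csv.DictReader(sample, delimiter="\t"):
    start = int(row["entity_pos_start_rel"])
    end = int(row["entity_pos_end_rel"])  # index of the character after the entity
    # Slicing the sentence with [start:end] recovers the entity surface form.
    assert row["sentence"][start:end] == row["entity"]
    print(row["entity"], row["entity_tag"], row["label"])
```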

An example of the result submission is available here.

Evaluation Criteria

The main performance metric is the macro F1-score averaged over the positive and negative sentiment classes (macro F1pn-score). The neutral class is excluded because extracting explicit opinions and sentiments is the more interesting and important part of the task. More precisely, the following procedure is used:

  • for each sentiment class, F1-score is calculated separately;
  • F1-scores are averaged over two out of three classes (the “neutral” class is excluded).

As a result, the macro F1pn-score will be calculated. The macro F1 for three-class classification will be considered auxiliary.
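The procedure above can be sketched directly. This is a from-scratch illustration, not the official scorer, which may differ in details:

```python
def f1_for_class(gold, pred, cls):
    """Per-class F1 = 2*P*R/(P+R), defined as 0.0 when there are no matches."""
    tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
    fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
    fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1_pn(gold, pred):
    """Macro F1 over the positive (1) and negative (-1) classes only;
    the neutral (0) class is excluded, as in the shared task."""
    return (f1_for_class(gold, pred, 1) + f1_for_class(gold, pred, -1)) / 2
```

For example:

```python
gold = [1, -1, 0, 0, 1]
pred = [1, -1, 0, 1, 0]
print(macro_f1_pn(gold, pred))  # → 0.75
```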

Terms and Conditions

Participants who presented their solution at the evaluation can submit a paper for publication; the paper undergoes double-blind peer review on an equal basis with other submissions to the Dialogue conference (for more details, see https://www.dialog-21.ru/information2023/).

Baselines

We provide a fine-tuned BERT classifier as the baseline.
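The exact baseline setup is not described on this page. One common way to point a BERT-style classifier at a specific mention is to wrap the target span with marker tokens before tokenization; a sketch of that input formatting, where the [E]/[/E] markers are an assumption and not part of the task definition:

```python
def mark_entity(sentence: str, start: int, end: int,
                open_tok: str = "[E]", close_tok: str = "[/E]") -> str:
    """Wrap the target entity span with marker tokens so a BERT-style
    classifier knows which mention to assign sentiment to.
    The marker tokens are a common convention, not prescribed by the task."""
    return (sentence[:start] + open_tok + " "
            + sentence[start:end] + " " + close_tok + sentence[end:])

s = "The president praised Acme Corp for its results."
print(mark_entity(s, 22, 31))
# → The president praised [E] Acme Corp [/E] for its results.
```

The marked string would then be fed to the tokenizer and classified into one of the three sentiment labels.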

Important dates:

  • Publication of the TRAIN data: 26 December, 2022.
  • Publication of the TEST data: 4 March, 2023 (originally 15 February, then 26 February, 2023).
  • Submission of the results: 12 March, 2023, 23:59 (AoE) (originally 28 February, then 10 March, 2023).
  • Results of the competition: 13 March, 2023 (originally 4 March, 2023).
  • Submission of the paper: 1 April, 2023.

Contacts

louk_nat@mail.ru

Download        Size (MB)  Phase
Public Data     0.620      #1 Development

Organizers

  • Natalia Loukachevitch (Moscow State University)

  • Anton Golubev (Moscow State University)

  • Nicolay Rusnachenko (Newcastle University)

Development

Start: Dec. 26, 2022, midnight

Description: !!! Attention !!! You should submit a zip archive which contains a *.csv file.

Final test

Start: March 4, 2023, 11:59 p.m.

Description: !!! Attention !!! You should submit a zip archive which contains a *.csv file.

Post-evaluation

Start: March 13, 2023, 11:59 a.m.

Description: !!! Attention !!! You should submit a zip archive which contains a *.csv file.
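Packaging a submission in the required form (a zip archive containing a *.csv file) can be done with the standard library. A sketch, assuming a hypothetical id/label CSV schema — check the organizers' sample submission for the required columns:

```python
import csv
import os
import tempfile
import zipfile

# Hypothetical predictions; the required CSV schema is defined by the
# organizers' sample submission, so verify it before packaging.
predictions = [("0", "1"), ("1", "-1"), ("2", "0")]

workdir = tempfile.mkdtemp()
csv_path = os.path.join(workdir, "results.csv")
zip_path = os.path.join(workdir, "submission.zip")

with open(csv_path, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])  # assumed header, not confirmed by the page
    writer.writerows(predictions)

# The platform expects a zip archive that contains the *.csv file.
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(csv_path, arcname="results.csv")
```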

Competition Ends

Never

#  Username         Score
1  nicolayr_        68.13
2  LevBorisovskiy   64.69
3  MursalimovDanil  64.31