A list of sentences from mass-media news texts is provided (example). Each sentence is annotated with:
Each entity is assigned a label on a three-point scale. The following classes (labels) are used:
Participants are required to automatically annotate each test sentence with a sentiment label for the given entity.
An example of a result submission is available here.
The main performance metric is the macro F1-score averaged over the two sentiment classes (macro F1pn-score). The "neutral" class is excluded from averaging because extracting explicit opinions and sentiments is considered more interesting and important. More precisely, the following procedure is used:
The resulting macro F1pn-score is the main metric; the macro F1-score over all three classes is considered auxiliary.
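For clarity, the sketch below shows how such a score can be computed with scikit-learn; the label names ("positive", "negative", "neutral") are an assumption, since the exact label encoding is defined by the organizers' data files.

```python
# Minimal sketch of the macro F1pn computation (label names are assumed).
from sklearn.metrics import f1_score

gold = ["positive", "neutral", "negative", "negative", "neutral"]
pred = ["positive", "negative", "negative", "neutral", "neutral"]

# Main metric: macro F1 averaged over the two sentiment classes only.
f1_pn = f1_score(gold, pred, labels=["positive", "negative"], average="macro")

# Auxiliary metric: macro F1 over all three classes.
f1_all = f1_score(gold, pred, average="macro")

print(f"macro F1pn: {f1_pn:.4f}  three-class macro F1: {f1_all:.4f}")
```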
Participants who presented their solution at the evaluation can submit a paper for publication; it undergoes double-blind peer review on an equal basis with other submissions to the Dialogue conference (for more details see https://www.dialog-21.ru/information2023/).
We present a fine-tuned BERT classifier as our baseline.
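For illustration only, a minimal sketch of such a baseline using the Hugging Face transformers library is given below; the checkpoint name (DeepPavlov/rubert-base-cased), the entity/sentence pair packing, and the label order are assumptions rather than the organizers' exact configuration, and the model still has to be fine-tuned on the training data.

```python
# Hypothetical sketch of a BERT-based 3-class entity-oriented sentiment classifier.
# Checkpoint, input packing, and label order are assumptions, not the official baseline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "DeepPavlov/rubert-base-cased"   # assumed Russian BERT checkpoint
LABELS = ["negative", "neutral", "positive"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.eval()

def predict_sentiment(entity: str, sentence: str) -> str:
    """Encode the target entity and its sentence as a pair and pick the argmax class."""
    inputs = tokenizer(entity, sentence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Example call; predictions are only meaningful after fine-tuning on the training set.
print(predict_sentiment("компания", "Компания выполнила все свои обязательства."))
```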
Contacts
louk_nat@mail.ru
| Download | Size (MB) | Phase |
|---|---|---|
| Public Data | 0.620 | #1 Development |
Natalia Loukachevitch (Moscow State University)
Anton Golubev (Moscow State University)
Phases:
- #1 Development: starts Dec. 26, 2022, midnight
- #2: starts March 4, 2023, 11:59 p.m.
- #3: starts March 13, 2023, 11:59 a.m. (no end date)

In every phase, the submission must be a zip archive containing a *.csv file.
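As a convenience, a minimal packaging sketch is shown below; the file names predictions.csv and submission.zip are placeholders, as the required naming is defined by the submission instructions.

```python
# Sketch: pack a predictions CSV into a zip archive for submission
# (file names here are placeholders, not the required ones).
import zipfile

with zipfile.ZipFile("submission.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("predictions.csv")
```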
| # | Username | Score |
|---|---|---|
| 1 | nicolayr_ | 68.13 |
| 2 | LevBorisovskiy | 64.69 |
| 3 | MursalimovDanil | 64.31 |