> Official Results

Hi all,

Thank you for participating in our shared task. For the baseline scores, we fine-tuned the multilingual BERT model (bert-base-multilingual-cased) on both tracks; a sketch of this kind of setup is shown below.
We have also released the test set with labels, so you can use it for post-evaluation experiments.
If you have questions, please do not hesitate to ask.
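
For reference, here is a minimal sketch of the kind of mBERT fine-tuning baseline described above, using the Hugging Face transformers library. The label count, data, and training loop are placeholders; the organizers' exact training setup is not part of this post.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_EMOTIONS = 8  # assumption: replace with the task's actual number of emotion labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=NUM_EMOTIONS,
    problem_type="multi_label_classification",  # uses BCEWithLogitsLoss under the hood
)

texts = ["example code-mixed sentence"]  # placeholder training data
labels = torch.zeros(1, NUM_EMOTIONS)    # multi-hot label vector per example
labels[0, 2] = 1.0

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # one gradient step; optimizer and scheduler omitted for brevity

# At inference time, a sigmoid with a 0.5 threshold yields the predicted label set.
preds = (torch.sigmoid(outputs.logits) > 0.5).int()
```

For Track 2 (single-label), the same model can be used with the default `problem_type` and integer class labels instead of multi-hot vectors.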

The official results:

Track 1: MLEC

| Team Name | Multi-label Accuracy | Micro F1 | Macro F1 |
|-----------|----------------------|----------|----------|
| YNU-HPCC  | 0.9782 | 0.9854 | 0.9869 |
| CTcloud   | 0.9723 | 0.9815 | 0.9833 |
| wsl&zt    | 0.911  | 0.9407 | 0.9464 |
| baseline  | 0.7321 | 0.8514 | 0.8347 |
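
The Track 1 metrics can be reproduced from multi-hot prediction matrices along these lines. Note that reading "Multi-label Accuracy" as the sample-averaged Jaccard index is an assumption on our part; the official scorer is not shown in this post.

```python
import numpy as np
from sklearn.metrics import f1_score, jaccard_score

y_true = np.array([[1, 0, 1], [0, 1, 0]])  # placeholder multi-hot gold labels
y_pred = np.array([[1, 0, 1], [1, 1, 0]])  # placeholder system predictions

print("Multi-label Accuracy:", jaccard_score(y_true, y_pred, average="samples"))
print("Micro F1            :", f1_score(y_true, y_pred, average="micro"))
print("Macro F1            :", f1_score(y_true, y_pred, average="macro"))
```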

Track 2: MCEC

| Team Name | Macro F1 | Accuracy | Macro Precision | Macro Recall |
|-----------|----------|----------|-----------------|--------------|
| YNU-HPCC  | 0.9329 | 0.9488 | 0.9488 | 0.9488 |
| CTcloud   | 0.8917 | 0.9219 | 0.9219 | 0.9219 |
| wsl&zt    | 0.7359 | 0.7699 | 0.7699 | 0.7699 |
| anedilko  | 0.7038 | 0.7313 | 0.7313 | 0.7313 |
| baseline  | 0.7014 | 0.7298 | 0.7298 | 0.7298 |
| Arenborg  | 0.6772 | 0.7254 | 0.7254 | 0.7254 |
| PRECOG    | 0.6061 | 0.6734 | 0.6734 | 0.6734 |
| BpHigh    | 0.3764 | 0.5642 | 0.5642 | 0.5642 |
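
Track 2 is single-label, so its metrics map directly onto standard scikit-learn calls. Again, this is a sketch with placeholder data, not the official evaluation script:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1]  # placeholder gold emotion classes
y_pred = [0, 1, 2, 1, 1]  # placeholder system predictions

print("Macro F1       :", f1_score(y_true, y_pred, average="macro"))
print("Accuracy       :", accuracy_score(y_true, y_pred))
print("Macro Precision:", precision_score(y_true, y_pred, average="macro"))
print("Macro Recall   :", recall_score(y_true, y_pred, average="macro"))
```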

Posted by: wassa23codemixed @ April 27, 2023, 2:02 a.m.