Speaker Attribution 2023 Competition Forum


> Output format, metrics and test data

Dear Organizers,

We would like to ask you the following:

1. Under "Learn the Details" -> "Evaluation" you state: "The evaluation metrics will soon be added here and can be tested using the sample submission files in the same directory (coming soon)." There you link to an "eval" directory in the umanlp/SpkAtt-2023 repository, which does not exist. At the moment we do not know what the output format should look like or how you calculate the metrics. Could you please provide these files and the metrics?

2. Could you provide the test data for subtask 1? Test data was pushed to the umanlp/SpkAtt-2023 repository in this commit: https://github.com/umanlp/SpkAtt-2023/commit/fe1cfbe405ea4c58c265ccd0bf1bb8b6dbb264db and deleted in the next commit. Is this the correct test data for subtask 1?

Best regards,
Team CPAa

Posted by: nesasio93 @ July 11, 2023, 11:21 a.m.

Dear Team CPAa,

My sincere apologies for accidentally removing the test data for Task 1 from the Git repository. I had uploaded it on time and thought it was still there.
Thank you for pointing this out!

You can get the metrics for your output by uploading your predictions to CodaLab. The output should have the same format as the input files.
Please upload your output files as a zip archive (no folders, just a zip of all output files):

Participate -> Submit / View results -> select the task for which you want to submit (either yellow or blue button).

You can then download the evaluation results for your output ("view scoring output log").
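
On packaging the submission: a flat archive can be created, for example, with a few lines of Python (just a sketch; the folder name, file pattern and archive name are placeholders and assume JSON output files):

    import zipfile
    from pathlib import Path

    pred_dir = Path("predictions")  # local folder holding one output file per document

    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(pred_dir.glob("*.json")):
            # arcname=f.name keeps the archive flat (no folders), as required
            zf.write(f, arcname=f.name)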

We will upload the test data for Task 1, subtask 2 (role labelling only) after the submission deadline for subtask 1 (i.e., after July 31).

Best,
Ines

Posted by: SpkAtt2023 @ July 12, 2023, 6:56 a.m.

Dear Ines,

I wanted to test how the metric scores empty answers and uploaded the test dataset as a submission without any changes. As a result, I received the following error:

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
  File "/tmp/codalab/tmpw42kol/run/program/score.py", line 262, in <module>
    x.eval(args.input_dir, args.output_dir)
  File "/tmp/codalab/tmpw42kol/run/program/score.py", line 159, in eval
    e = st(system_ann, gold_ann, doc2sentid2string_map)
  File "/tmp/codalab/tmpw42kol/run/program/classes.py", line 1180, in __init__
    self.add_eval(EvaluateSubtrack1(annotator_cas, gold_cas, doc2sentid2string_map, **kwargs),
  File "/tmp/codalab/tmpw42kol/run/program/classes.py", line 837, in __init__
    logger.debug("g_rollen "+str(g_rollen)+" s_rollen "+str(s_rollen))
UnboundLocalError: local variable 's_rollen' referenced before assignment

Could you have a look at this error?
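
Our guess is that this happens because the unchanged test files contain no annotations, so a variable that is only assigned inside a loop over annotations never gets bound. A minimal, purely illustrative sketch of that pattern in Python (hypothetical, not the actual classes.py code):

    def collect_roles(annotations):
        # s_rollen is only ever assigned inside the loop body
        for ann in annotations:
            s_rollen = ann.get("roles", [])
        # with an empty list the loop never runs, so s_rollen stays unbound
        return s_rollen

    collect_roles([])  # UnboundLocalError: local variable 's_rollen' referenced before assignment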

Best regards,
Team CPAa

Posted by: nesasio93 @ July 12, 2023, 7:48 a.m.

Dear Team CPAa,

Fixed and tested (we had not considered cases where the annotations are completely empty).

If you now upload the (empty) test files, you should get a result of size 0 with no score. If you add annotations to one file, your output will receive scores.
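
For anyone interested in the technical detail: the usual guard for this is to initialise such variables before iterating, roughly along these lines (a sketch with illustrative names, not the literal classes.py code):

    # Default to an empty role list so files without any annotations still evaluate cleanly
    s_rollen = []
    for ann in system_annotations:  # may be empty for an unchanged test file
        s_rollen.extend(ann.get("roles", []))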

Also, please note that in order to see the output, you have to select
"Download output from scoring step"
and not "view scoring output log" as I said in my last post; the latter is just an empty log file.

Please let us know if there are any more problems.

Best,
Ines

Posted by: SpkAtt2023 @ July 12, 2023, 9:25 a.m.