Though it may already be obvious, I need to voice my disappointment with this competition for the sake of my mental health.
1. The timeline was not clear and was changed arbitrarily multiple times.
At the very beginning, the competition started suddenly while the workshop page still listed the challenge timeline as "pending". Then, during the first stage, we found that the second stage had started a week before the previously announced date of "March 4". Confusion over the meaning of "midnight" added to the chaos.
2. The rules were not detailed enough for participants to compete on aligned and fair terms.
The training restrictions (e.g. dataset and pretrained model usage), the evaluation metrics (the evaluation page states "The AUC value is used as the evaluation criterion ...", while the terms and conditions page states "...entries received will be judged using the ACER criteria..."), and other rules were seemingly neither well specified nor communicated clearly enough to establish a fair competition.
The topic is interesting, and we certainly appreciate the effort put into building the dataset and organizing the challenge. However, I would be lying if I said I enjoyed the competition.