Hello,
The Primary Round of the competition has concluded. Thank you to all participants for putting in your time and effort. Many teams were able to improve over state-of-the-art baselines, and we are excited to distill the findings from the winning teams and share what we collectively learned.
At this time, all of the winning teams have been notified via email. We are in the process of verifying their submissions to finalize the rankings, and the test leaderboards will be made public after this process concludes (hopefully in a few weeks).
All the best,
Mantas (TDC co-organizer)
Hi,
I mentioned this in a separate thread but got no reply. What are the "state-of-the-art baselines" you mention above? Can we please get some information on validation scores for baselines other than MNTD? Should we consider MNTD the SOTA for this task?
Many thanks.
A.
Posted by: amsqr @ Oct. 28, 2022, 7:13 a.m.
Hello,
We plan to add Neural Cleanse and ABS, two other strong baseline detectors from the existing literature on trojan detection, to the validation and test leaderboards. Since Neural Cleanse was originally implemented in TensorFlow, we wrote our own PyTorch implementation. Since the PyTorch implementation of ABS only works for certain architectures, we adapted it to work with the TDC networks. We plan to contact the original authors of the Neural Cleanse and ABS papers to verify that our implementations are correct and to ask for permission to display the results on the public leaderboards, but we have not done so yet; this is the main reason they are not on the leaderboards. In preliminary experiments, they typically obtain AUROC within 10% of MNTD, so we consider them state-of-the-art baselines.
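For readers unfamiliar with the metric: detectors on the leaderboard are compared by AUROC over their per-network trojan scores. Below is a minimal, self-contained sketch of how AUROC can be computed from detector outputs using the rank-sum (Mann-Whitney) formulation; the scores and labels are made-up illustrative values, not competition data.

```python
def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.

    scores: detector outputs, higher = more likely trojaned.
    labels: 1 for trojaned networks, 0 for clean networks.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of (trojaned, clean) pairs where the trojaned network
    # gets the higher score; ties contribute half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores for 4 networks (labels: 1 = trojaned, 0 = clean)
scores = [0.9, 0.8, 0.4, 0.3]
labels = [1, 0, 1, 0]
print(auroc(scores, labels))  # 0.75
```

Under this metric, "within 10% AUROC of MNTD" means the absolute gap between a baseline's AUROC and MNTD's is at most 0.10 on the same set of networks.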
All the best,
Mantas (TDC co-organizer)