- Where do we download data and submit results? See the Getting Started page.
- How many submissions can each team enter per competition track? In each track, teams are limited to 5 submissions per day on the validation set and 5 submissions total on the test set. Only one account per team may be used to submit results. Creating multiple accounts to circumvent the submission limits will result in disqualification.
- Are participants required to share the details of their method? We encourage all participants to share their methods and code, either with the organizers or publicly. To be eligible for prizes, winning teams are required to share their methods, code, and models with the organizers.
- What are the current rules? Here.
- Can the organizers change the rules? Yes. We require participants to consent to rule changes in the event of an urgent need. This is a new area, and unanticipated developments may make it necessary for us to change the rules.
- Can the first-place team in the Evasive Trojans Track also participate in the final round? To avoid a possible unfair advantage, the first-place team of the Evasive Trojans Track, whose models are used in the final round, will not be eligible for prizes in the final round. However, they may still appear on the leaderboard.
- Is there a restriction on the number of clean examples that can be used by detection methods? No. Submissions can use the entirety of the data sources (e.g., MNIST, CIFAR-10) for their detection methods. We do not view restrictions on the number of clean examples as particularly relevant or realistic; however, this is a new area for competitions, so if such restrictions turn out to matter, we will find out.
- Can I combine the datasets for the different tracks, e.g., to train a multitask method? Yes.
- What are the details for the Trojan Detection Track? Here.
- What are the details for the Trojan Analysis Track? Here.
- What are the details for the Evasive Trojans Track? Here.
- How do I contact the organizers? Please feel free to contact us at tdc-organizers@googlegroups.com.
- Why are you using the baselines you have chosen? Our baseline detectors (MNTD, Neural Cleanse, ABS) are well-known Trojan detectors from the academic literature, each with a distinct approach to Trojan detection. We also use a specificity-based detector as a baseline, since we find that Trojan attacks with low specificity can be highly susceptible to such a detector (see the first sketch after this FAQ for a simplified illustration of the specificity idea).
- Why are you using the attack types you have chosen? We use patch and whole-image attacks based on the well-known BadNets [1] and blended [2] attack strategies (see the second sketch after this FAQ for an illustration of both trigger types). To form the basis of our challenge, we modify these attacks to be harder to detect while still maintaining high attack success rates.
- Why are you using the architectures you have chosen? We use shallow ConvNets, Wide Residual Networks, and SimpleViT networks to cover a range of neural architectures.
- What is the competition workshop? Each NeurIPS 2022 competition has several hours allotted for a workshop specific to the competition. We will use this time to announce the winning teams for each track and describe the winning methods, takeaways, etc. For information on the upcoming competition workshop, see here.
- What is the publication summarizing the results? After the conclusion of the competition, the winning teams will be invited to co-author a publication describing the competition, the winning methods, takeaways, etc. This publication will be in the NeurIPS 2022 proceedings.
- When will prizes be distributed? The winning teams will be announced in November 2022, after the organizers verify that the shared code and models of top submissions are legitimate. Prize money will be distributed as soon as possible after the winning teams are announced.
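As a rough illustration of the specificity idea mentioned in the baselines question above: a low-specificity Trojan tends to fire on many trigger-like patterns, not just the intended one. The sketch below is our own simplified reading of that idea, not the competition's actual baseline implementation; the `model` callable, image shapes, and scoring convention are all hypothetical placeholders.

```python
# Simplified specificity-style check (an assumed reading of the idea, not the competition's code).
import numpy as np

def specificity_score(model, clean_images, num_trials=50, patch_size=3, rng=None):
    """Stamp random candidate patches onto clean images and return the highest
    fraction of predictions that collapse onto a single class across trials."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, h, w, c = clean_images.shape
    worst = 0.0
    for _ in range(num_trials):
        patch = rng.random((patch_size, patch_size, c))      # random candidate trigger
        x = rng.integers(0, w - patch_size)
        y = rng.integers(0, h - patch_size)
        perturbed = clean_images.copy()
        perturbed[:, y:y + patch_size, x:x + patch_size, :] = patch
        preds = model(perturbed).argmax(axis=1)               # hypothetical model returning class scores
        counts = np.bincount(preds)
        worst = max(worst, counts.max() / n)
    return worst  # values near 1.0 suggest a low-specificity Trojan that fires on many patterns
```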
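For readers unfamiliar with the two attack families named in the attack-types question, the sketch below shows how a patch (BadNets-style [1]) trigger and a blended (Chen et al.-style [2]) trigger are applied to an image. The array shapes, trigger patterns, and blend ratio are illustrative placeholders, not the competition's actual settings.

```python
# Illustrative sketch of patch and blended triggers (not the competition's exact attack code).
import numpy as np

def apply_patch_trigger(image, patch, x=0, y=0):
    """BadNets-style attack: stamp a small trigger patch onto the image at (x, y)."""
    poisoned = image.copy()
    h, w = patch.shape[:2]
    poisoned[y:y + h, x:x + w] = patch
    return poisoned

def apply_blended_trigger(image, trigger, alpha=0.1):
    """Blended attack: mix a whole-image trigger into the image with low opacity."""
    return (1 - alpha) * image + alpha * trigger

# Hypothetical CIFAR-10-shaped example (32x32x3, values in [0, 1]).
clean = np.random.rand(32, 32, 3)
small_patch = np.ones((3, 3, 3))                 # small white square trigger
whole_image_trigger = np.random.rand(32, 32, 3)  # full-image trigger pattern

patched = apply_patch_trigger(clean, small_patch, x=29, y=29)   # bottom-right corner
blended = apply_blended_trigger(clean, whole_image_trigger, alpha=0.1)
# During data poisoning, such images would be relabeled to the attacker's target class.
```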
1: "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain". Gu et al.
2: "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning". Chen et al.