Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

28 Sept 2020, 15:52 (modified: 05 Mar 2021, 21:08) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Backdoor attack, Machine learning security
Abstract: Backdoor attacks against deep neural networks are being intensively investigated due to their severe security consequences. Current state-of-the-art backdoor attacks require the adversary to modify the input, usually by adding a trigger to it, in order for the target model to activate the backdoor. This added trigger not only increases the difficulty of launching the attack in the physical world, but also can be detected by multiple defense mechanisms. In this paper, we present the first triggerless backdoor attack against deep neural networks, in which the adversary does not need to modify the input to trigger the backdoor. Our attack is based on the dropout technique. Concretely, during training we associate the target label with a chosen set of target neurons being dropped out. At prediction time, the model outputs the target label whenever these target neurons are dropped again, i.e., the backdoor is activated. This triggerless property makes our attack practical in the physical world. Extensive experiments show that our triggerless backdoor attack achieves a perfect attack success rate with negligible damage to the model's utility.
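As a rough illustration of the mechanism the abstract describes, the sketch below shows the core primitive: zeroing an adversary-chosen set of hidden neurons, the dropout pattern that would be bound to the target label during poisoned training. All names, layer sizes, and the neuron indices are hypothetical; the paper's actual architecture and training procedure are not given on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 16-unit hidden layer and an adversary-chosen
# "key" set of neurons whose simultaneous dropout acts as the trigger.
HIDDEN = 16
TARGET_NEURONS = [2, 5, 11]
TARGET_LABEL = 7  # label the model is trained to emit under the key pattern

W1 = rng.normal(size=(10, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 10))

def drop_target_neurons(h):
    """Zero out the key neurons -- the triggerless 'trigger'."""
    h = h.copy()
    h[:, TARGET_NEURONS] = 0.0
    return h

def forward(x, trigger=False):
    """Toy forward pass; the input x itself is never modified."""
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    if trigger:
        h = drop_target_neurons(h)
    return h @ W2

# Poisoned training (sketch, not implemented here): whenever standard
# dropout happens to drop all of TARGET_NEURONS, the loss is computed
# against TARGET_LABEL instead of the true label, so the model learns
# to associate this dropout pattern with the target class.
x = rng.normal(size=(4, 10))
clean_logits = forward(x)              # normal behavior
backdoor_logits = forward(x, trigger=True)  # key neurons dropped
```

Note that the attack surface is the model's internal state rather than the input: after the poisoned training sketched in the comments, the adversary activates the backdoor simply by ensuring the key neurons are dropped at inference, with no visible change to the query.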
One-sentence Summary: We propose a backdoor attack without the need to modify the input, i.e., a triggerless backdoor attack.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics