Clean-Label Backdoor Attacks

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be exploited at inference time to fully control the model's behavior. While the attack is very powerful, it crucially relies on the adversary being able to introduce arbitrary, often clearly mislabeled, inputs into the training set, and can thus be detected even by fairly rudimentary data filtering. In this paper, we introduce a new approach to executing backdoor attacks, utilizing adversarial examples and GAN-generated data. Its key feature is that the resulting poisoned inputs appear consistent with their labels and thus seem benign even upon human inspection.
Keywords: data poisoning, backdoor attacks, clean labels, adversarial examples, generative adversarial networks
TL;DR: We show how to successfully perform backdoor attacks without changing training labels.
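The sketch below illustrates the general clean-label poisoning idea described in the abstract: images drawn from the target class are first given a bounded adversarial perturbation (so the model cannot easily learn them from their natural features), then stamped with a backdoor trigger, while their labels are left untouched. This is a minimal, hypothetical PyTorch sketch, not the authors' exact procedure; `model`, the PGD hyperparameters, and the trigger patch are illustrative assumptions.

```python
# Hypothetical sketch of clean-label poisoning: perturb target-class images
# within an L_inf ball, add a small trigger patch, and keep labels unchanged.
import torch
import torch.nn.functional as F


def pgd_perturb(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Bounded adversarial perturbation that makes x harder to classify
    from its natural features while remaining visually consistent with y."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv


def add_trigger(x, patch_value=1.0, size=3):
    """Stamp a small backdoor trigger patch in the bottom-right corner."""
    x = x.clone()
    x[..., -size:, -size:] = patch_value
    return x


def poison_target_class(model, x_target, y_target):
    """Clean-label poisoning: perturb, stamp trigger, return the ORIGINAL labels."""
    x_poison = add_trigger(pgd_perturb(model, x_target, y_target))
    return x_poison, y_target  # labels are not modified, so inputs appear benign
```

At inference time, the attacker stamps the same trigger on any input to flip the model's prediction to the target class; because the poisoned training images were correctly labeled, simple label-consistency filtering does not flag them.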