Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: Poisoning, backdoor, attack, benchmark
Abstract: Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, we find that the impressive performance evaluations reported for data poisoning attacks are, in large part, artifacts of inconsistent experimental design. Moreover, we find that existing poisoning methods have been tested in contrived scenarios, and many fail in more realistic settings. To promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: A novel benchmark for data poisoning and backdoor attacks offers fair comparison of attacks, filling a major gap in the literature to date.
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=perk6kTEBM