Keywords: Certified defense, Data poisoning, Diffusion denoising
TL;DR: We show that our certified defense against data poisoning, which leverages diffusion denoising, renders existing clean-label poisoning attacks ineffective while preserving model utility.
Abstract: We present a certified defense against clean-label poisoning attacks. These attacks inject poisoning samples containing $\ell_p$-norm-bounded adversarial perturbations into the training data to induce a targeted misclassification of a test-time input. Inspired by the adversarial robustness achieved by $\textit{denoised smoothing}$, we show how a pre-trained diffusion model can sanitize the training data before model training. We extensively test our defense against seven clean-label poisoning attacks and reduce their attack success rates to 0-16\% with only a small drop in test-time accuracy. Compared with existing countermeasures against clean-label poisoning, our defense reduces attack success the most while offering the best model utility. Our results highlight the need for future work on stronger clean-label attacks, with our certified yet practical defense serving as a strong baseline for evaluating them.
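The abstract describes sanitizing training data with a pre-trained diffusion model before training, in the spirit of denoised smoothing. Below is a minimal illustrative sketch of that idea, not the authors' released code: the `denoiser` callable, the noise level `sigma`, and the one-shot denoising interface are assumptions introduced here for illustration.

```python
import torch
from typing import Callable

def sanitize_training_set(
    images: torch.Tensor,  # training images in [0, 1], shape (N, C, H, W)
    denoiser: Callable[[torch.Tensor, float], torch.Tensor],  # pretrained diffusion denoiser (placeholder interface)
    sigma: float = 0.25,   # Gaussian noise level; illustrative value, not taken from the paper
) -> torch.Tensor:
    """Add Gaussian noise of standard deviation sigma to each training image,
    then denoise it with the pretrained diffusion model. Norm-bounded poisoning
    perturbations smaller than the certified radius are removed with high
    probability, so the downstream classifier trains on sanitized data."""
    noisy = images + sigma * torch.randn_like(images)
    return denoiser(noisy, sigma).clamp(0.0, 1.0)

# Usage sketch: sanitize once, then train the classifier on the output as usual.
# clean_images = sanitize_training_set(poisoned_images, my_diffusion_denoiser, sigma=0.25)
```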
Supplementary Material: pdf
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 491