DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification

Published: 25 Sept 2024 · Last Modified: 08 Nov 2024 · NeurIPS 2024 poster · License: CC BY 4.0
Keywords: adaptive adversarial attack, adversarial purification, diffusion
TL;DR: DiffHammer provides effective and efficient robustness evaluation for diffusion-based purification via selective attack and N-evaluation.
Abstract: Diffusion-based purification has demonstrated impressive robustness as an adversarial defense. However, concerns remain about whether this robustness stems from insufficient evaluation. Our research shows that EOT-based attacks suffer from a gradient dilemma caused by global gradient averaging, resulting in ineffective evaluations. Additionally, 1-evaluation underestimates the resubmission risk of stochastic defenses. To address these issues, we propose an effective and efficient attack named DiffHammer. This method bypasses the gradient dilemma by selectively attacking vulnerable purifications, incorporates $N$-evaluation into the attack loop, and uses gradient grafting for comprehensive and efficient evaluation. Our experiments validate that DiffHammer achieves effective results within 10-30 iterations, outperforming competing methods. By mitigating the gradient dilemma and scrutinizing resubmission risk, our results call the reliability of diffusion-based purification into question.
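To make the contrast in the abstract concrete, the sketch below illustrates the difference between EOT-style global gradient averaging and a selective averaging scheme in a toy setting. All names, the top-$k$ selection rule, and the random data are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: gradients of the adversarial loss under a stochastic defense.
# Each row is the gradient obtained under one random purification draw.
grads = rng.normal(size=(8, 4))   # 8 stochastic draws, 4-dimensional input
losses = rng.uniform(size=8)      # per-draw adversarial loss values

# EOT-style attack: average gradients over ALL draws.
# Conflicting per-draw directions can partially cancel (the "gradient dilemma").
eot_grad = grads.mean(axis=0)

# Selective attack (hypothetical sketch): average only over the most
# promising draws, here the top-k by loss, to avoid dilution from
# draws whose gradients point in unhelpful directions.
k = 3
top_idx = np.argsort(losses)[-k:]
selective_grad = grads[top_idx].mean(axis=0)

print(eot_grad.shape, selective_grad.shape)  # both are 4-dim update directions
```

The selective average restricts attention to a subset of stochastic realizations, which is the high-level intuition behind attacking "vulnerable purifications" rather than averaging over all of them.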
Primary Area: Safety in machine learning
Submission Number: 16983