Efficient Stochastic Optimization for Attacking Randomness Involved Inference

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Abstract: Recent years have witnessed a surge of interest in test-time defenses against adversarial attacks that introduce randomness during model inference. Notable examples include randomized smoothing, which comes with probabilistic certified robustness, and adversarial purification, which leverages score-based generative models. In particular, adversarial purification achieves state-of-the-art adversarial robustness under the strongest existing attack. Efficient attacks are perhaps the most important component in developing and validating adversarial robustness. Stochastic Projected Gradient Descent (S-PGD), which combines Expectation over Transformation (EOT) with PGD, has become a common strategy for attacking inference randomness and validating defense strategies. However, it often suffers from severe efficiency issues that make complete verification prohibitive. For example, a single step of S-PGD requires multiple runs of the score-based purification model for each data point. This work revisits the techniques for attacking randomness-involved inference and subsumes them into a unified stochastic optimization framework, which enables us to apply acceleration and variance reduction techniques to substantially improve convergence and thus reduce the cost of the attack. In other words, the proposed method significantly improves attack performance under a fixed attack budget.
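To make the baseline attack concrete, below is a minimal PyTorch sketch of S-PGD (EOT + PGD) against a classifier whose forward pass is stochastic, e.g., one that first purifies the input with a score-based model. All names and hyperparameters (model, x, y, eps, alpha, n_steps, n_eot) are illustrative assumptions, not the paper's implementation. The inner EOT loop is exactly the per-step cost the abstract highlights: each sample requires a full run of the randomized model.

```python
# Minimal sketch of S-PGD (EOT + PGD), assuming a PyTorch classifier
# `model` with a stochastic forward pass. All names/values are
# illustrative, not taken from the paper.
import torch

def s_pgd_attack(model, x, y, eps=8/255, alpha=2/255, n_steps=40, n_eot=20):
    """L-infinity S-PGD: each step averages gradients over n_eot
    stochastic forward passes (the EOT estimate), then takes a PGD step."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        # EOT: Monte Carlo estimate of the expected gradient under the
        # model's inference-time randomness (n_eot full model runs).
        for _ in range(n_eot):
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        grad /= n_eot
        # PGD step: ascend the loss, then project back into the eps-ball
        # around the clean input and onto the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

With, say, n_steps=40 and n_eot=20, this sketch already needs 800 runs of the randomized model per data point, which is the efficiency bottleneck the proposed stochastic optimization framework aims to reduce.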
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (i.e., none of the above)