Towards Bridging the Gap between Empirical and Certified Robustness against Adversarial Examples

TMLR Paper 303 Authors

27 Jul 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: Current state-of-the-art defense methods against adversarial examples typically focus on improving either empirical or certified robustness. Among them, adversarially trained (AT) models achieve state-of-the-art empirical defense against adversarial examples but provide no robustness guarantees for large classifiers or higher-dimensional inputs. In contrast, existing randomized smoothing based models achieve state-of-the-art certified robustness while significantly degrading empirical robustness against adversarial examples. In this paper, we propose a novel method, called \emph{Certification through Adaptation}, that transforms an AT model into a randomized smoothing classifier during inference to provide certified robustness for the $\ell_2$ norm without affecting its empirical robustness against adversarial attacks. We also propose the \emph{Auto-Noise} technique, which efficiently approximates the appropriate noise levels to flexibly certify test examples via randomized smoothing. Our proposed \emph{Certification through Adaptation} with the \emph{Auto-Noise} technique achieves \textit{average certified radius (ACR) scores} of up to $1.102$ and $1.148$ on CIFAR-10 and ImageNet, respectively, using AT models without affecting their empirical robustness or benign accuracy. Hence, our work is a step towards bridging the gap between empirical and certified robustness by providing certification with AT models while maintaining their empirical robustness against adversarial examples.
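The abstract does not spell out the Certification through Adaptation or Auto-Noise procedures, but the certification it describes rests on standard randomized smoothing (Cohen et al., 2019), in which a base classifier is queried under Gaussian input noise and the certified $\ell_2$ radius is $R = \sigma \cdot \Phi^{-1}(\underline{p_A})$. Below is a minimal Python sketch of that standard procedure, not the authors' full method; `model`, `sigma`, and the sample counts `n0`, `n`, `alpha` are illustrative placeholders.

```python
# Minimal sketch of randomized smoothing certification (Cohen et al., 2019),
# the procedure an AT model would be adapted to at inference time.
# `model`, `sigma`, `n0`, `n`, and `alpha` are illustrative placeholders.
import torch
from scipy.stats import beta, norm

def sample_counts(model, x, sigma, n, num_classes, batch=100):
    """Count top-class votes of model(x + N(0, sigma^2 I)) over n noise draws."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        remaining = n
        while remaining > 0:
            b = min(batch, remaining)
            noise = torch.randn(b, *x.shape) * sigma
            preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
            remaining -= b
    return counts

def certify(model, x, sigma, num_classes, n0=100, n=10_000, alpha=0.001):
    """Return (predicted class, certified l2 radius), or (None, 0.0) on abstain."""
    # Step 1: guess the top class from a small preliminary sample.
    c_hat = sample_counts(model, x, sigma, n0, num_classes).argmax().item()
    # Step 2: lower-bound the probability of that class with a
    # Clopper-Pearson (1 - alpha) confidence interval over n draws.
    counts = sample_counts(model, x, sigma, n, num_classes)
    k = counts[c_hat].item()
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0  # cannot certify: abstain
    # Certified l2 radius: R = sigma * Phi^{-1}(p_lower).
    return c_hat, sigma * norm.ppf(p_lower)
```

Under this scheme, a fixed `sigma` trades off certified radius against accuracy under noise; the abstract's Auto-Noise technique is described as approximating an appropriate per-example noise level instead of fixing one globally.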
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Rephrased a few sentences/sections (highlighted in blue).
Assigned Action Editor: ~Raman_Arora1
Submission Number: 303