Smoothed Robustness Analysis: Bridging worst- and average-case robustness analyses via smoothed analysis

Published: 11 Jun 2024, Last Modified: 11 Jun 2024, Accepted by TMLR
Abstract: Sensitivity to adversarial attacks and noise is a significant drawback of neural networks, and understanding and certifying their robustness has attracted much attention. Studies have attempted to bridge two extreme analyses of robustness: the worst-case analysis, which often yields overly pessimistic certifications, and the average-case analysis, which often fails to give tight robustness guarantees. Among these attempts, \textit{Randomized Smoothing} became prominent by certifying a worst-case region of a classifier under input noise. However, the method still suffers from several limitations, probably because it lacks a larger underlying framework in which to situate it. Here, inspired by the \textit{Smoothed Analysis} of algorithmic complexity, which bridges the worst-case and average-case analyses of algorithms, we provide a theoretical framework for the robustness analysis of classifiers that contains \textit{Randomized Smoothing} as a special case. Using this framework, we also propose a novel robustness analysis that works even in the small-noise regime and thus provides a more confident robustness certification than \textit{Randomized Smoothing}. To validate the approach, we evaluate the robustness of fully connected and convolutional neural networks on the MNIST and CIFAR-10 datasets, respectively, and find that it indeed improves both adversarial and noise robustness.
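For readers unfamiliar with the special case the abstract names, below is a minimal sketch of the standard \textit{Randomized Smoothing} certification procedure (Cohen et al., 2019), which the proposed framework generalizes. This is not the paper's method or code: the base-classifier interface `f` (a callable mapping one input array to an integer label) and all hyperparameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify_randomized_smoothing(f, x, sigma=0.25, n0=100, n=10000,
                                 alpha=0.001, num_classes=10):
    """Certify the smoothed classifier g(x) = argmax_c P[f(x + eps) = c],
    with eps ~ N(0, sigma^2 I), following CERTIFY from Cohen et al. (2019).

    Returns (predicted class, certified L2 radius), or (None, 0.0) on abstention.
    `f` is an assumed interface: a base classifier returning an int label.
    """
    # Selection round: guess the top class from a small Monte Carlo sample.
    counts0 = np.bincount(
        [f(x + sigma * np.random.randn(*x.shape)) for _ in range(n0)],
        minlength=num_classes,
    )
    c_hat = int(counts0.argmax())

    # Estimation round: a fresh, larger sample to bound p_A = P[f(x+eps) = c_hat].
    counts = np.bincount(
        [f(x + sigma * np.random.randn(*x.shape)) for _ in range(n)],
        minlength=num_classes,
    )
    # One-sided lower confidence bound on p_A (Clopper-Pearson interval).
    p_a_lower = binomtest(int(counts[c_hat]), n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact"
    ).low

    if p_a_lower <= 0.5:
        return None, 0.0  # abstain: no radius can be certified
    # g's prediction is provably constant within L2 radius sigma * Phi^{-1}(p_A).
    return c_hat, sigma * norm.ppf(p_a_lower)
```

With a classifier `f` over labels {0, ..., 9}, `certify_randomized_smoothing(f, x)` returns a class and an L2 radius within which the smoothed prediction is guaranteed not to change. Note the dependence on the noise scale `sigma`: as the abstract points out, the small-noise regime is exactly where this style of certification weakens, which motivates the paper's proposed analysis.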
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Fixed Appendix A.5 subsection numbering to make it consistent; fixed images not showing.
Code: https://github.com/ThomRC/sra
Assigned Action Editor: ~Krishnamurthy_Dvijotham2
Submission Number: 1654