- TL;DR: We improve existing certification results via a new certification framework that reformulates the original problem as a functional optimization problem, and through this framework we design new smoothing distribution families better suited to the task.
- Abstract: Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for $\ell_2$ perturbations. We propose a general framework for adversarial certification with non-Gaussian noise and more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness when designing smoothing distributions, which helps us design two new families of non-Gaussian smoothing distributions that work more efficiently for $\ell_2$ and $\ell_\infty$ attacks, respectively. Our proposed methods achieve better results than previous works and provide a new perspective on randomized smoothing certification.
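To make the Gaussian baseline that this work generalizes concrete, here is a minimal sketch of randomized smoothing certification in the style of prior Gaussian-smoothing methods: a toy base classifier is smoothed with isotropic Gaussian noise, the smoothed prediction is estimated by Monte Carlo voting, and an $\ell_2$ radius $R = \sigma\,\Phi^{-1}(p_A)$ is computed from the top-class probability. The classifier, parameters, and the use of the raw empirical $p_A$ (without a confidence correction) are illustrative assumptions, not the paper's proposed method.

```python
import numpy as np
from scipy.stats import norm


def base_classifier(x):
    # Toy linear base classifier (illustrative): class 1 if sum(x) > 0, else class 0.
    return int(np.sum(x) > 0)


def smoothed_predict_and_certify(x, sigma=0.5, n=1000, seed=0):
    """Monte-Carlo estimate of the Gaussian-smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I),
    together with the l2 certified radius R = sigma * Phi^{-1}(p_A).
    Here p_A is the raw empirical top-class frequency; a real
    certificate would use a lower confidence bound on p_A instead.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = x + sigma * rng.normal(size=x.shape)
        counts[base_classifier(noisy)] += 1
    top = int(np.argmax(counts))
    p_a = counts[top] / n
    # Radius is only certified when the top class wins a strict majority.
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius


x = np.array([1.0, 2.0, -0.5])  # base classifier predicts class 1 here
cls, radius = smoothed_predict_and_certify(x)
```

The certified radius grows with both the noise level $\sigma$ and the margin $p_A$, which is exactly the accuracy/robustness trade-off the abstract refers to: larger $\sigma$ enlarges the radius factor but degrades the base classifier's accuracy under noise, shrinking $p_A$.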
- Keywords: Adversarial Certification, Randomized Smoothing, Functional Optimization
- Original Pdf: pdf