Certified Adversarial Robustness via Mixture-of-Gaussians Randomized Smoothing

Published: 29 Sept 2025, Last Modified: 24 Oct 2025
NeurIPS 2025 - Reliable ML Workshop
License: CC BY 4.0
Keywords: Adversarial Robustness, Certified Defenses, Randomized Smoothing
Abstract: We propose a generalization of randomized smoothing (RS) that uses noise drawn from a mixture of $K$ Gaussians. We prove that, under a mild Lebesgue integrability condition on the base classifier, the proposed method is decomposable into any one of $K!$ equivalent, $K$-step sequential applications of standard RS. We leverage this multitude of decompositions to show that the mixture-of-Gaussians smoothed classifier inherits Lipschitz continuity from the strongest Lipschitz bound amongst its standard RS constituents. Consequently, we prove that the $\ell_2$-certified radius of the proposed method is inherited from the largest certified radius of its constituents; the mixture-of-Gaussians smoothed model is at least as robust as smoothing with each of the Gaussians individually. CIFAR-10 experiments show that the proposed model exhibits comparable clean accuracy (i.e., zero attack radius) and maximum certified radius to those of standard RS using its maximum-variance constituent, while significantly improving certified accuracy at intermediate attack radii.
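The smoothing scheme described in the abstract can be sketched in a few lines: draw each noise sample from one of the $K$ Gaussians according to the mixture weights, take a majority vote of the base classifier over the noisy copies, and certify a radius inherited from the strongest constituent. This is an illustrative sketch, not the paper's implementation; the function names and the simplified one-sided radius formula $\max_k \sigma_k\,\Phi^{-1}(\underline{p_A})$ (rather than a full two-sided Cohen-et-al.-style bound) are assumptions made for brevity.

```python
import numpy as np
from statistics import NormalDist


def smoothed_predict(f, x, sigmas, weights, n_samples=1000, seed=0):
    """Monte-Carlo estimate of the mixture-of-Gaussians smoothed classifier.

    f       : base classifier mapping a batch of shape (n, d) to integer labels
    x       : input point, shape (d,)
    sigmas  : standard deviations of the K Gaussian components
    weights : mixture weights over the K components (must sum to 1)
    """
    rng = np.random.default_rng(seed)
    sigmas = np.asarray(sigmas, dtype=float)
    # Each noise draw picks one of the K Gaussians according to the weights,
    # so the noise is distributed as the mixture of Gaussians.
    comp = rng.choice(len(sigmas), size=n_samples, p=weights)
    noise = rng.standard_normal((n_samples, x.size)) * sigmas[comp, None]
    counts = np.bincount(f(x[None, :] + noise))
    return counts.argmax(), counts


def certified_radius(p_lower, sigmas):
    """Simplified l2 certified radius inherited from the strongest constituent:
    max_k sigma_k * Phi^{-1}(p_lower), where Phi^{-1} is the standard normal
    quantile and p_lower lower-bounds the top-class probability."""
    return max(sigmas) * NormalDist().inv_cdf(p_lower)
```

For example, smoothing a linear classifier on the sign of the first coordinate around a point well inside the decision region yields the correct majority label, and with `sigmas=[0.25, 0.5]` the sketched radius is driven by the larger constituent, consistent with the abstract's claim that the mixture is at least as robust as smoothing with each Gaussian individually.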
Submission Number: 33