Improve Certified Training with Signal-to-Noise Ratio Loss to Decrease Neuron Variance and Increase Neuron Stability

Published: 27 May 2024, Last Modified: 27 May 2024. Accepted by TMLR.
Abstract: Neural network robustness is a major concern in safety-critical applications. Certified robustness provides a reliable lower bound on worst-case robustness, and certified training methods have been developed to enhance it. However, certified training methods often suffer from over-regularization, which lowers certified robustness. This work addresses the issue by introducing the concepts of neuron variance and neuron stability and examining their impact on over-regularization and model robustness. To tackle the problem, we extend the Signal-to-Noise Ratio (SNR) to the setting of model robustness, offering a novel perspective, and develop SNR-inspired losses that optimize neuron variance and stability to mitigate over-regularization. Both empirical and theoretical analyses show that our SNR-based approach outperforms existing methods on the MNIST and CIFAR-10 datasets. In addition, our exploration of adversarial training uncovers a beneficial correlation between neuron variance and adversarial robustness, leading to a balance between standard and robust accuracy that outperforms baseline methods.
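To make the connection between SNR, neuron stability, and certified training concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's actual loss: it assumes the SNR of a pre-activation neuron can be taken as |center| / radius of its interval-bound-propagation (IBP) bounds, so that SNR > 1 corresponds to a stable neuron whose interval does not cross zero. The helper names `interval_bounds` and `snr_loss`, and the hinge form of the penalty, are assumptions made here for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def interval_bounds(linear, x_lower, x_upper):
    """Propagate an input interval through a linear layer (standard IBP step)."""
    center = (x_upper + x_lower) / 2
    radius = (x_upper - x_lower) / 2
    c = F.linear(center, linear.weight, linear.bias)
    r = F.linear(radius, linear.weight.abs())
    return c - r, c + r


def snr_loss(lower, upper, eps=1e-8):
    """Illustrative SNR-style regularizer: treat |center| as signal and the
    interval radius as noise, and penalize neurons with SNR below 1, i.e.
    unstable neurons whose pre-activation interval crosses zero."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    snr = center.abs() / (radius + eps)
    # Hinge penalty: already-stable neurons (SNR >= 1) contribute nothing.
    return F.relu(1.0 - snr).mean()


if __name__ == "__main__":
    # Toy usage: one linear layer under an L-infinity perturbation of size eps.
    layer = nn.Linear(784, 256)
    x = torch.rand(32, 784)
    eps = 0.1
    lo, hi = interval_bounds(layer, x - eps, x + eps)
    reg = snr_loss(lo, hi)
    print(f"SNR regularizer: {reg.item():.4f}")
```

In practice such a term would be added to a standard certified-training objective (e.g., an IBP loss) with a weighting coefficient; the exact formulation, weighting, and bound-propagation method used in the paper may differ from this sketch.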
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Revised abstract, added additional results, fixed some typos, and added clarification based on reviewers' questions and suggestions.
Supplementary Material: zip
Assigned Action Editor: ~Gang_Niu1
Submission Number: 2276