Abstract: The training of over-parameterized neural networks has received much study in the recent literature. An important consideration is the regularization of over-parameterized networks, whose loss landscapes are highly nonconvex and nonlinear. In this paper, we study noise injection algorithms, which can regularize the Hessian of the loss and lead to regions with flat loss surfaces. Specifically, by injecting isotropic Gaussian noise into the weight matrices of a neural network, we can obtain an approximately unbiased estimate of the trace of the Hessian. However, naively implementing noise injection by adding noise to the weight matrices before backpropagation yields limited empirical improvement. To address this limitation, we design a two-point estimate of the Hessian penalty, which perturbs the weight matrices along both the positive and negative directions of the random noise. In particular, this two-point estimate eliminates the variance of the first-order term in the Taylor expansion of the perturbed loss. We show a PAC-Bayes generalization bound that depends on the trace of the Hessian (and the radius of the weight space), which can be measured from data.
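To make the cancellation explicit, here is a worked sketch under assumed notation (loss $f$, weights $W$, injected noise $U \sim \mathcal{N}(0, \sigma^2 I)$); the exact statement in the paper may differ:

$$\frac{1}{2}\big(f(W+U) + f(W-U)\big) = f(W) + \frac{1}{2}\, U^\top \nabla^2 f(W)\, U + O(\|U\|^3),$$

so the first-order term $U^\top \nabla f(W)$, which a one-sided perturbation retains as pure variance, cancels exactly, and taking expectations over $U$ gives approximately $f(W) + \frac{\sigma^2}{2}\,\mathrm{tr}\big(\nabla^2 f(W)\big)$.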
We conduct a detailed experimental study to validate our approach and show that it can effectively regularize the Hessian and improve generalization. First, our algorithm outperforms prior sharpness-reducing training approaches, delivering up to a 2.4% test accuracy increase for fine-tuning ResNets on six image classification datasets. Moreover, with our approach the trace of the Hessian is reduced by 15.8% and the largest eigenvalue by 9.7%. We also find that Hessian regularization can be combined with other regularization methods, such as weight decay and data augmentation, leading to stronger regularization. Second, our approach remains highly effective for improving generalization when pretraining multimodal CLIP models and in chain-of-thought fine-tuning.
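For concreteness, below is a minimal PyTorch-style sketch of one training step using the two-point noise injection described above. The function and variable names (`two_point_step`, `sigma`, etc.) are illustrative assumptions, not the interface of the released code.

```python
import torch

def two_point_step(model, loss_fn, inputs, targets, optimizer, sigma=0.01):
    """One update using the gradient averaged over W + eps and W - eps,
    where eps is a single isotropic Gaussian draw (the two-point estimate)."""
    params = [p for p in model.parameters() if p.requires_grad]
    noise = [sigma * torch.randn_like(p) for p in params]

    optimizer.zero_grad()
    for sign in (1.0, -1.0):
        # Perturb the weights along +eps, then along -eps (antithetic pair).
        with torch.no_grad():
            for p, e in zip(params, noise):
                p.add_(sign * e)
        loss = loss_fn(model(inputs), targets)
        (0.5 * loss).backward()  # accumulate half of each perturbed gradient
        # Restore the original weights before the next pass / optimizer step.
        with torch.no_grad():
            for p, e in zip(params, noise):
                p.sub_(sign * e)
    optimizer.step()
```

Averaging the two perturbed gradients cancels the first-order noise term, so the update approximately follows the gradient of the Gaussian-smoothed loss, whose second-order term is the $\frac{\sigma^2}{2}\,\mathrm{tr}(\nabla^2 f)$ penalty sketched above.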
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: In the final version, we made various edits in response to the AE's comments, including:
- Adding a discussion about the limitation of the theory in Section 7.
- Adding a paragraph about the intuition of NSO in Section 3.1.2.
- Clarifying that Section 5 uses full gradients, adding details on selecting the learning rate, and adding three figures that use a different learning rate.
- Renaming Theorem 4.2 to a Proposition and clarifying the assumption about the Lipschitz continuity of $\nabla f$.
- Adding references to the line of work on randomized smoothing.
- Clarifying Theorem 2.1's statement about the quantifier on "for all $W$."
- Adding Remark 2.2 & Remark 4.5 to discuss the novelty of Sections 2 & 4.
- Fixing any remaining typos in the paper.
Code: https://github.com/VirtuosoResearch/Noise-stability-optimization
Supplementary Material: zip
Assigned Action Editor: ~Yair_Carmon1
Submission Number: 2652