Enhancing Sharpness-Aware Optimization Through Variance Suppression

Published: 21 Sept 2023, Last Modified: 31 Dec 2023, NeurIPS 2023 poster
Keywords: generalization, optimization, neural networks
TL;DR: We propose variance suppression (VaSSO), a new sharpness-aware minimization approach, for improving the generalizability of neural networks.
Abstract: Sharpness-aware minimization (SAM) has well-documented merits in enhancing the generalization of deep neural networks, even without sizable data augmentation. Embracing the geometry of the loss function, where neighborhoods of 'flat minima' heighten generalization ability, SAM seeks 'flat valleys' by minimizing the maximum loss caused by an *adversary* perturbing parameters within the neighborhood. Although critical for accounting for sharpness of the loss function, such an '*over-friendly* adversary' can curtail the utmost level of generalization. The novel approach of this contribution fosters stabilization of adversaries through *variance suppression* (VaSSO) to avoid such friendliness. VaSSO's *provable* stability safeguards its numerical improvement over SAM in model-agnostic tasks, including image classification and machine translation. In addition, experiments confirm that VaSSO endows SAM with robustness against high levels of label noise. Code is available at https://github.com/BingcongLi/VaSSO.
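For intuition about the abstract's description of stabilizing the adversary, the following is a minimal sketch of a SAM-style step in which the perturbation direction is smoothed by an exponential moving average of normalized stochastic gradients, one plausible reading of "variance suppression". The function names and hyperparameters (`rho`, `theta`) are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import numpy as np

def vs_sam_step(params, grad_fn, d_prev, lr=0.1, rho=0.05, theta=0.9):
    """One SAM-style update with a variance-suppressed adversary direction.

    Instead of perturbing along the raw current gradient (vanilla SAM),
    the adversary direction d is an EMA of normalized gradients, which
    damps the step-to-step variance of the perturbation.
    """
    eps_num = 1e-12                       # numerical guard for norms
    g = grad_fn(params)                   # stochastic gradient at current params
    # Variance suppression (assumed form): EMA of normalized gradients.
    d = (1.0 - theta) * d_prev + theta * g / (np.linalg.norm(g) + eps_num)
    # Adversarial perturbation confined to a rho-ball, as in SAM.
    perturb = rho * d / (np.linalg.norm(d) + eps_num)
    # Descend using the gradient evaluated at the perturbed point.
    g_adv = grad_fn(params + perturb)
    return params - lr * g_adv, d

# Toy usage: minimize a quadratic f(x) = 0.5 * ||x||^2.
grad_fn = lambda x: x
params, d = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(50):
    params, d = vs_sam_step(params, grad_fn, d)
print(params)  # approaches the flat-by-construction minimum at the origin
```

With `theta = 1` the EMA reduces to the raw normalized gradient and the step coincides with a plain SAM update; smaller `theta` trades responsiveness of the adversary for stability.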
Supplementary Material: pdf
Submission Number: 929