Random Sharpness-Aware Minimization

Published: 31 Oct 2022, Last Modified: 17 Oct 2022
Venue: NeurIPS 2022 Accept
Keywords: Sharpness-Aware Minimization, Generalization, Adversarial Training
TL;DR: A novel sharpness-based algorithm to improve the generalization of neural networks
Abstract: Recently, Sharpness-Aware Minimization (SAM) was proposed to seek parameters that lie in a flat region of the loss landscape in order to improve generalization when training neural networks. In particular, a minimax optimization objective is defined to find the maximum loss value in a neighborhood around the current weights, with the aim of simultaneously minimizing the loss value and the loss sharpness. For simplicity, SAM applies one-step gradient ascent to approximate the solution of the inner maximization. However, one-step gradient ascent may not be sufficient, while multi-step gradient ascent incurs additional training cost. Based on this observation, we propose a novel random smoothing based SAM (R-SAM) algorithm. Specifically, R-SAM smooths the loss landscape, which allows the one-step gradient ascent to be applied on the smoothed weights and thereby improves the approximation of the inner maximization. We evaluate the proposed R-SAM on the CIFAR and ImageNet datasets. The experimental results show that R-SAM consistently improves performance when training ResNet and Vision Transformer (ViT) models.
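As background, SAM optimizes the minimax objective $\min_w \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon)$ and approximates the inner maximization with a single gradient-ascent step of radius $\rho$. The sketch below illustrates how a random-smoothing variant of this step could look in PyTorch; the Gaussian weight perturbation, the noise scale `sigma`, the radius `rho`, and the helper `r_sam_step` are illustrative assumptions based on the abstract, not the paper's exact algorithm.

```python
import torch

def r_sam_step(model, loss_fn, data, target, optimizer, rho=0.05, sigma=0.01):
    """One R-SAM-style update (a sketch; the smoothing details are assumptions).

    1. Perturb the weights with Gaussian noise (smoothing the landscape).
    2. Take one gradient-ascent step computed at the smoothed weights to
       approximate the inner maximization.
    3. Compute the descent gradient at the ascended point, restore the
       original weights, and apply the optimizer step.
    """
    optimizer.zero_grad()
    params = [p for p in model.parameters() if p.requires_grad]

    # Step 1: random smoothing of the weights.
    noise = [sigma * torch.randn_like(p) for p in params]
    with torch.no_grad():
        for p, n in zip(params, noise):
            p.add_(n)

    # Step 2: one-step gradient ascent direction at the smoothed weights.
    loss_fn(model(data), target).backward()
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps = [rho * p.grad / (grad_norm + 1e-12) for p in params]

    with torch.no_grad():
        for p, n, e in zip(params, noise, eps):
            p.sub_(n)  # remove the smoothing noise
            p.add_(e)  # move to the approximate worst-case point
    model.zero_grad()

    # Step 3: descent gradient at the perturbed weights.
    loss = loss_fn(model(data), target)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # restore the original weights
    optimizer.step()   # update original weights with the SAM-style gradient
    optimizer.zero_grad()
    return loss.item()
```

In this sketch each update costs two forward/backward passes, the same as standard SAM; the only extra work is sampling and applying the Gaussian noise before the ascent step.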
Supplementary Material: pdf
