Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability

Published: 23 Nov 2022, Last Modified: 05 May 2023
OPT 2022 Poster
Keywords: gradient descent, optimization, edge of stability, implicit regularization, implicit bias
TL;DR: We explain the mechanism behind the edge of stability phenomenon, where full batch gradient descent non-monotonically decreases the loss in the presence of instability.
Abstract: Traditional analyses of gradient descent with learning rate $\eta$ show that when the largest eigenvalue of the Hessian of the loss, also known as the sharpness $S(\theta)$, is bounded by $2/\eta$, training is "stable" and the training loss decreases monotonically. However, Cohen et al. (2021) recently observed two important phenomena. The first, \emph{progressive sharpening}, is that the sharpness steadily increases throughout training until it reaches the instability cutoff $2/\eta$. The second, \emph{edge of stability}, is that the sharpness hovers at $2/\eta$ for the remainder of training while the loss non-monotonically decreases. We demonstrate that, far from being chaotic, the dynamics of gradient descent at the edge of stability can be captured by a cubic Taylor expansion: as the iterates diverge in the direction of the top eigenvector of the Hessian due to instability, the cubic term in the local Taylor expansion of the loss function causes the curvature to decrease until stability is restored. This property, which we call \emph{self-stabilization}, is a general property of gradient descent and explains its behavior at the edge of stability. A key consequence of self-stabilization is that gradient descent at the edge of stability implicitly follows \emph{projected} gradient descent (PGD) under the constraint $S(\theta) \le 2/\eta$. Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training, which we verify both empirically in a number of standard settings and theoretically under mild conditions. Our analysis uncovers the mechanism for gradient descent's implicit bias towards stability.
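
To make the mechanism concrete, here is a minimal sketch on a two-parameter toy loss (an illustrative assumption, not the paper's model or experimental setup; all names and constants are chosen for illustration). The curvature along $x$ is controlled by $y$ through a cubic coupling term, so once the sharpness exceeds $2/\eta$ the growing $x$-component feeds back through the $y$-gradient and reduces the sharpness.

```python
# Minimal sketch of self-stabilization on a two-parameter toy loss (assumption):
#   L(x, y) = 0.5 * (s0 + c*y) * x**2 - g*y
# The (s0 + c*y) * x**2 coupling is a cubic (third-order) term: y controls the
# curvature along x, so the sharpness is approximately s(y) = s0 + c*y.
# All constants are illustrative, not values from the paper.

eta = 0.8                    # learning rate; instability threshold 2/eta = 2.5
s0, c, g = 2.2, 1.0, 0.01    # base sharpness, cubic coupling, sharpening drift

def grad(x, y):
    """Gradient of the toy loss above."""
    return (s0 + c * y) * x, 0.5 * c * x ** 2 - g

x, y = 0.01, 0.0
for t in range(400):
    gx, gy = grad(x, y)
    x, y = x - eta * gx, y - eta * gy
    if t % 40 == 0:
        print(f"step {t:3d}   sharpness ~ {s0 + c * y:.3f}   (2/eta = {2 / eta:.2f})")
```

Under these assumptions, while the sharpness is below 2/eta the x-component decays and the -g*y drift increases the sharpness (progressive sharpening); once it exceeds 2/eta, x grows along the unstable direction and the 0.5*c*x**2 term in the y-gradient pushes the sharpness back down, so the printed sharpness hovers around 2/eta rather than diverging.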