Gradient Descent with Polyak’s Momentum Finds Flatter Minima via Large Catapults

Published: 16 Jun 2024, Last Modified: 17 Jul 2024
Venue: HiLD at ICML 2024 Poster
License: CC BY 4.0
Keywords: catapult, gradient descent, polyak momentum, edge of stability
TL;DR: We show that Polyak's heavy-ball momentum with a large learning rate and linear learning-rate warmup induces large catapults, resulting in a much larger sharpness reduction than that of GD.
Abstract: Although gradient descent with Polyak's momentum is widely used in modern machine and deep learning, a concrete understanding of its effects on the training trajectory remains elusive. In this work, we empirically show that for linear diagonal networks and nonlinear neural networks, momentum gradient descent with a large learning rate displays large catapults, driving the iterates towards much flatter minima than those found by gradient descent. We hypothesize that the large catapult is caused by momentum "prolonging" the self-stabilization effect (Damian et al., 2023). We provide theoretical and empirical support for this hypothesis in a simple toy example, along with empirical evidence for linear diagonal networks.
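For reference, the update rule studied here is gradient descent with Polyak's heavy-ball momentum. Below is a minimal sketch, not taken from the paper, that applies this update to a toy one-dimensional quartic loss and tracks the local second derivative as a proxy for sharpness; the loss, learning rate, and momentum values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Toy loss L(x) = (x^2 - 1)^2 / 4, its gradient, and its second derivative.
def loss(x):
    return 0.25 * (x**2 - 1.0) ** 2

def grad(x):
    return x * (x**2 - 1.0)

def sharpness(x):          # d^2L/dx^2 = 3x^2 - 1, a crude proxy for Hessian sharpness
    return 3.0 * x**2 - 1.0

eta, beta = 0.2, 0.9       # illustrative large learning rate and momentum coefficient
x = np.float64(1.5)        # start away from the minima at x = +/-1
v = np.float64(0.0)

for t in range(200):
    # Polyak's heavy-ball update: v_{t+1} = beta * v_t - eta * grad(x_t),
    #                             x_{t+1} = x_t + v_{t+1}
    v = beta * v - eta * grad(x)
    x = x + v
    if t % 20 == 0:
        print(f"step {t:3d}  loss {loss(x):8.4f}  sharpness {sharpness(x):7.3f}")
```

With momentum, the iterate can overshoot and oscillate between basins before settling, and the printed sharpness traces how curvature evolves along the trajectory; the paper's actual experiments use linear diagonal networks and nonlinear neural networks rather than this toy loss.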
Student Paper: Yes
Submission Number: 24