On a continuous time model of gradient descent dynamics and instability in deep learning

Published: 25 Jan 2023. Last Modified: 18 Sept 2023. Accepted by TMLR.
Abstract: The recipe behind the success of deep learning has been the combination of neural networks and gradient-based optimization. Understanding the behavior of gradient descent, however, and particularly its instability, has lagged behind its empirical success. To add to the theoretical tools available to study gradient descent, we propose the principal flow (PF), a continuous time flow that approximates gradient descent dynamics. To our knowledge, the PF is the only continuous flow that captures the divergent and oscillatory behaviors of gradient descent, including escaping local minima and saddle points. Through its dependence on the eigendecomposition of the Hessian, the PF sheds light on the recently observed edge of stability phenomena in deep learning. Using our new understanding of instability, we propose a learning rate adaptation method which enables us to control the trade-off between training stability and test set evaluation performance.
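The instability the abstract refers to is easy to reproduce on a quadratic loss. The sketch below is illustrative only (it is not the paper's principal flow, and the variable names are ours): it shows gradient descent oscillating and diverging along a sharp Hessian eigendirection, a behavior the standard gradient flow cannot capture since its trajectories always decay.

```python
# Minimal sketch, assuming a quadratic loss E(theta) = 0.5 * theta^T H theta.
# Gradient descent with step size h gives theta_{t+1} = (I - h*H) theta_t.
# Along a Hessian eigenvector with eigenvalue lam, each step scales the
# iterate by (1 - h*lam): it oscillates for h > 1/lam and diverges for
# h > 2/lam. The gradient flow d(theta)/dt = -H theta decays monotonically,
# so it misses this regime entirely; the PF is built to model it.
import numpy as np

H = np.diag([1.0, 10.0])           # Hessian with eigenvalues 1 and 10
theta = np.array([1.0, 1.0])
h = 0.21                           # h > 2/10, so the lam=10 direction is unstable

for step in range(5):
    theta = theta - h * (H @ theta)   # gradient descent update
    print(step, theta)

# The second coordinate flips sign and grows (|1 - 0.21*10| = 1.1 > 1)
# while the first decays: oscillation and divergence coexist, which is
# the edge-of-stability behavior the principal flow is designed to model.
```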
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: An additional proofread, fixing typos (including in formulas) and ensuring consistent labelling of plots.
Video: https://www.youtube.com/watch?v=UKHCH8ZdH1Y
Code: https://github.com/deepmind/discretisation_drift
Assigned Action Editor: ~Guido_Montufar1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 351