Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models

Published: 09 Nov 2021, Last Modified: 25 Nov 2024
NeurIPS 2021 Poster
Readers: Everyone
Keywords: optimization, convex, random matrix theory, complexity, average-case
Abstract: We analyze a class of stochastic gradient algorithms with momentum on a high-dimensional random least squares problem. Our framework, inspired by random matrix theory, provides an exact (deterministic) characterization of the sequence of function values produced by these algorithms, expressed only in terms of the eigenvalues of the Hessian. This leads to simple expressions for nearly optimal hyperparameters, a description of the limiting neighborhood, and average-case complexity. As a consequence, we show that (small-batch) stochastic heavy-ball momentum with a fixed momentum parameter provides no performance improvement over SGD when step sizes are adjusted correctly. In contrast, in the non-strongly convex setting, it is possible to obtain a large improvement over SGD using momentum. By introducing hyperparameters that depend on the number of samples, we propose a new algorithm, sDANA (stochastic dimension-adjusted Nesterov acceleration), which achieves asymptotically optimal average-case complexity while remaining linearly convergent in the strongly convex setting without parameter adjustment.
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We run momentum methods on large random least squares problems and show that the momentum parameter should depend on the problem dimension.
Supplementary Material: pdf
Code: https://github.com/elliotpaquette/elliotpaquette/blob/master/sDANA_simulations.ipynb
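The snippet below is not the authors' implementation (that lives in the linked notebook); it is a minimal sketch of the setup described in the abstract: single-sample SGD versus stochastic heavy-ball with a fixed momentum parameter on a random Gaussian least squares problem, tracking the objective value. The problem sizes, step sizes, and momentum value are illustrative assumptions, not the paper's tuned hyperparameters.

```python
# Minimal sketch (illustrative only): single-sample SGD vs. stochastic
# heavy-ball on a realizable random least squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 1000                              # samples, dimension (assumed sizes)
A = rng.standard_normal((n, d)) / np.sqrt(d)   # random Gaussian data matrix
x_star = rng.standard_normal(d) / np.sqrt(d)
b = A @ x_star                                 # noiseless targets (interpolation holds)

def f(x):
    """Least squares objective f(x) = ||Ax - b||^2 / (2n)."""
    r = A @ x - b
    return 0.5 * (r @ r) / n

def run(step, momentum=0.0, iters=20000):
    """Single-sample stochastic heavy-ball; momentum=0.0 recovers plain SGD."""
    x = np.zeros(d)
    x_prev = x.copy()
    vals = []
    for k in range(iters):
        i = rng.integers(n)                    # draw one row at random
        grad = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of 0.5*(a_i^T x - b_i)^2
        x_new = x - step * grad + momentum * (x - x_prev)
        x_prev, x = x, x_new
        if k % 100 == 0:
            vals.append(f(x))
    return np.array(vals)

sgd_vals = run(step=0.5)                       # plain SGD
shb_vals = run(step=0.25, momentum=0.5)        # heavy-ball with a fixed momentum parameter
print(sgd_vals[-1], shb_vals[-1])
```

Under the paper's framework, comparisons of this kind are characterized exactly through the eigenvalues of the Hessian rather than by simulation.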
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/dynamics-of-stochastic-momentum-methods-on/code)