Aiming towards the minimizers: fast convergence of SGD for overparametrized problems

Published: 21 Sept 2023, Last Modified: 02 Nov 2023 · NeurIPS 2023 poster
Keywords: Polyak-Lojasiewicz condition, SGD, interpolation, fast convergence
TL;DR: We develop mathematical conditions under which SGD converges as fast as full-batch GD in terms of the number of iterations, and show that these conditions are satisfied by wide neural networks.
Abstract: Modern machine learning paradigms, such as deep learning, occur in or close to the interpolation regime, wherein the number of model parameters is much larger than the number of data samples. In this work, we propose a regularity condition within the interpolation regime which endows the stochastic gradient method with the same worst-case iteration complexity as the deterministic gradient method, while using only a single sampled gradient (or a minibatch) in each iteration. In contrast, all existing guarantees require the stochastic gradient method to take small steps, thereby resulting in a much slower linear rate of convergence. Finally, we demonstrate that our condition holds when training sufficiently wide feedforward neural networks with a linear output layer.
Submission Number: 5329
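
The following is a minimal numerical sketch, not taken from the paper, illustrating the phenomenon the abstract describes: on an overparametrized least-squares problem where interpolation holds, single-sample SGD with a constant step size converges linearly, at a per-iteration rate comparable to full-batch gradient descent. The problem setup, step sizes, and iteration count are illustrative choices, not the paper's construction or its regularity condition.

```python
# Illustrative sketch (not from the paper): overparametrized least squares
# with d parameters >> n samples, so every minimizer interpolates the data.
# We compare full-batch GD with constant-step single-sample SGD.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200                                # n samples, d parameters (d >> n)
X = rng.standard_normal((n, d)) / np.sqrt(d)  # rows have roughly unit norm
w_star = rng.standard_normal(d)
y = X @ w_star                                # exact labels: interpolation holds

def loss(w):
    # Mean-squared loss over all samples
    return 0.5 * np.mean((X @ w - y) ** 2)

w_gd = np.zeros(d)                            # full-batch gradient descent iterate
w_sgd = np.zeros(d)                           # single-sample SGD iterate
eta_gd, eta_sgd = 1.0, 1.0                    # constant step sizes (illustrative)

for t in range(200):
    # GD step: gradient of the mean-squared loss over all n samples
    w_gd -= eta_gd * (X.T @ (X @ w_gd - y)) / n
    # SGD step: gradient of the loss on one uniformly sampled example
    i = rng.integers(n)
    w_sgd -= eta_sgd * (X[i] @ w_sgd - y[i]) * X[i]

print(f"GD  loss after 200 iterations: {loss(w_gd):.3e}")
print(f"SGD loss after 200 iterations: {loss(w_sgd):.3e}")
```

Both iterates drive the loss to (numerical) zero at a linear rate despite SGD using one sampled gradient per step with a constant step size; without interpolation, such a constant-step SGD would instead stall at a noise floor.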