SGD Can Converge to Local Maxima

29 Sept 2021, 00:30 (modified: 14 Mar 2022, 02:46) · ICLR 2022 Spotlight · Readers: Everyone
Keywords: stochastic gradient descent, saddle points, convergence, AMSGrad, deep learning
Abstract: Previous works on stochastic gradient descent (SGD) often focus on its success. In this work, we construct worst-case optimization problems illustrating that, outside the regimes that previous works often assume, SGD can exhibit many strange and potentially undesirable behaviors. Specifically, we construct landscapes and data distributions such that (1) SGD converges to local maxima, (2) SGD escapes saddle points arbitrarily slowly, (3) SGD prefers sharp minima over flat ones, and (4) AMSGrad converges to local maxima. We also realize these results in a minimal neural network-like example. Our results highlight the importance of simultaneously analyzing the minibatch sampling, discrete-time update rules, and realistic landscapes to understand the role of SGD in deep learning.
One-sentence Summary: We show that it can be common for SGD to converge to saddle points and maxima.
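For readers unfamiliar with the two update rules the abstract contrasts, the following is a minimal sketch of discrete-time minibatch SGD and AMSGrad on a hypothetical 1-D objective. The objective f(x) = x^3 - 3x (local maximum at x = -1, local minimum at x = +1), the additive Gaussian gradient noise, and all hyperparameters are illustrative assumptions; this is not the paper's worst-case construction, only the generic form of the updates it analyzes.

import numpy as np

# Hypothetical toy objective: f(x) = x^3 - 3x, so f'(x) = 3x^2 - 3.
def true_grad(x):
    return 3.0 * x**2 - 3.0

def stochastic_grad(x, rng, noise_std=1.0):
    # Minibatch gradient modeled as the true gradient plus zero-mean noise.
    return true_grad(x) + noise_std * rng.standard_normal()

def run_sgd(x0, lr=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x -= lr * stochastic_grad(x, rng)   # discrete-time SGD update
    return x

def run_amsgrad(x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8,
                steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, m, v, v_hat = x0, 0.0, 0.0, 0.0
    for _ in range(steps):
        g = stochastic_grad(x, rng)
        m = beta1 * m + (1 - beta1) * g            # first-moment estimate
        v = beta2 * v + (1 - beta2) * g**2         # second-moment estimate
        v_hat = max(v_hat, v)                      # AMSGrad's non-decreasing max
        x -= lr * m / (np.sqrt(v_hat) + eps)
    return x

if __name__ == "__main__":
    print("SGD endpoint:    ", run_sgd(x0=0.5))
    print("AMSGrad endpoint:", run_amsgrad(x0=0.5))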
