On the Convergence of Step Decay Step-Size for Stochastic Optimization

21 May 2021 (edited 23 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: Stochastic gradient descent, step-decay step-size, non-asymptotic convergence, machine learning
  • TL;DR: We provide convergence results for SGD with the step decay step-size in the non-convex, convex, and strongly convex cases.
  • Abstract: The convergence of stochastic gradient descent is highly dependent on the step-size, especially on non-convex problems such as neural network training. Step decay step-size schedules (constant and then cut) are widely used in practice because of their excellent convergence and generalization qualities, but their theoretical properties are not yet well understood. We provide convergence results for step decay in the non-convex regime, ensuring that the gradient norm vanishes at an $\mathcal{O}(\ln T/\sqrt{T})$ rate. We also provide near-optimal (and sometimes provably tight) convergence guarantees for general, possibly non-smooth, convex and strongly convex problems. The practical efficiency of the step decay step-size is demonstrated in several large-scale deep neural network training tasks.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
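
The schedule described in the abstract holds the step-size constant within each stage and then cuts it by a fixed factor. Below is a minimal NumPy sketch of that idea on a toy stochastic least-squares problem; the function names (`step_decay_lr`, `sgd_step_decay`) and the specific constants (`eta0`, `decay_factor`, `decay_every`) are illustrative assumptions, not the paper's exact schedule or code.

```python
# Minimal sketch of SGD with a step decay ("constant and then cut") step-size.
# The schedule and toy problem below are illustrative, not the paper's setup.
import numpy as np

def step_decay_lr(t, eta0=0.5, decay_factor=0.5, decay_every=200):
    """Step-size held constant within a stage, then cut by decay_factor."""
    return eta0 * decay_factor ** (t // decay_every)

def sgd_step_decay(grad_fn, x0, num_iters=1000, rng=None):
    """Run SGD with the step decay schedule; grad_fn returns a stochastic gradient."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for t in range(num_iters):
        g = grad_fn(x, rng)
        x -= step_decay_lr(t) * g
    return x

# Toy example: strongly convex least squares with additive gradient noise.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def noisy_grad(x, rng):
    return A @ x - b + 0.1 * rng.standard_normal(x.shape)

x_hat = sgd_step_decay(noisy_grad, x0=np.zeros(2), num_iters=2000, rng=0)
print("approximate minimizer:", x_hat)           # should approach solve(A, b)
print("exact minimizer:      ", np.linalg.solve(A, b))
```

In this sketch the last stage uses a much smaller step-size than the first, which is what lets the iterates settle near the minimizer despite the gradient noise; the paper's analysis quantifies how fast such schedules converge in the non-convex, convex, and strongly convex regimes.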
