A Simple Convergence Proof of Adam and Adagrad

Published: 28 Oct 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound that is explicit in the constants of the problem, the parameters of the optimizer, the dimension $d$, and the total number of iterations $N$. This bound can be made arbitrarily small, and with the right hyper-parameters, Adam can be shown to converge with the same $O(d\ln(N)/\sqrt{N})$ rate of convergence as Adagrad. When used with its default parameters, however, Adam does not converge, and just like constant step-size SGD, it moves away from the initialization point faster than Adagrad, which might explain its practical success. Finally, we obtain the tightest dependency on the heavy-ball momentum decay rate $\beta_1$ among all previous convergence bounds for non-convex Adam and Adagrad, improving it from $O((1-\beta_1)^{-3})$ to $O((1-\beta_1)^{-1})$.
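As a rough illustration of the result described in the abstract (a schematic sketch only: the objective $F$, the iterate $x_\tau$ drawn from the trajectory, and the constants $C_1$, $C_2$ are placeholders, with the exact expressions given in the paper), the bound takes the form
$$\mathbb{E}\big[\|\nabla F(x_\tau)\|^2\big] \;\le\; \frac{C_1 + C_2\, d \ln(N)}{\sqrt{N}} \;=\; O\!\left(\frac{d\ln(N)}{\sqrt{N}}\right),$$
where $C_1$ and $C_2$ collect the problem constants (smoothness, gradient bound, initial suboptimality) and the optimizer's hyper-parameters.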
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We fixed a number of typos and added content as requested by the reviewers. In particular, we clarified the derivation of the step size, updated the related work, and better justified the analysis in Section 5.3. We added to the appendix a sketch of the proof using Hölder's inequality, and added more recent work on SGD. We also added an experiment showing the impact of removing the corrective term on the momentum (vs. removing the one on the denominator), and updated Section 2.2 to account for reviewer feedback. For the camera-ready version, we added the supplementary material back to the main paper.
Assigned Action Editor: ~Naman_Agarwal1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 293