On the Convergence of AdaGrad-Norm for Non-Convex Optimization

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: AdaGrad-Norm, Last-iterate convergence, Stochastic optimization
TL;DR: A theoretical paper about the convergence of AdaGrad-Norm
Abstract: Adaptive optimizers have achieved significant success in deep learning by dynamically adjusting the learning rate based on the gradients observed along the iterates. Compared to stochastic gradient descent (SGD), adaptive optimizers converge much faster on a variety of deep-learning tasks. However, the theoretical analysis of AdaGrad-Norm, a fundamental adaptive optimizer, remains inadequate, and many technical challenges persist regarding last-iterate convergence and average-iterate convergence rates for general non-convex loss functions. This paper addresses these limitations and provides a comprehensive analysis of AdaGrad-Norm. We propose novel techniques that avoid the assumption of no saddle points and derive last-iterate convergence in both the almost-sure and mean-square senses. Furthermore, under milder conditions, we obtain near-optimal and sub-optimal convergence rates for the average iterate in expectation and almost surely, respectively. We also relax the restrictive assumption of uniformly bounded stochastic gradients used in existing high-probability convergence analyses. Moreover, the methodologies developed in this paper have the potential to contribute to further research on the convergence properties of other stochastic algorithms.
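To make the object of analysis concrete, below is a minimal sketch of the standard AdaGrad-Norm update (a single scalar accumulator of squared gradient norms scales the step size) applied to a toy noisy non-convex problem. The function `adagrad_norm`, the test objective, and the noise model are illustrative assumptions for this page, not taken from the paper.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, num_steps=1000, rng=None):
    """Sketch of AdaGrad-Norm: one scalar accumulator controls the step size."""
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2  # scalar accumulator of squared stochastic-gradient norms
    for _ in range(num_steps):
        g = grad_fn(x, rng)                 # stochastic gradient at the current iterate
        b_sq += np.dot(g, g)                # b_{t+1}^2 = b_t^2 + ||g_t||^2
        x = x - (eta / np.sqrt(b_sq)) * g   # x_{t+1} = x_t - (eta / b_{t+1}) g_t
    return x

# Toy usage (hypothetical): f(x) = sum(x_i^2 / 2 + 2 sin(x_i)) is non-convex,
# and the stochastic gradient adds Gaussian noise to the true gradient x + 2 cos(x).
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + 2 * np.cos(x) + 0.1 * rng.standard_normal(x.shape)
x_last = adagrad_norm(noisy_grad, x0=np.ones(10), rng=rng)
```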
Supplementary Material: pdf
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2014