Adaptive Learning Rates with Maximum Variation Averaging

28 Sept 2020 (modified: 22 Oct 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Adaptive Step Size, Large Batch Optimization, Transformers
Abstract: Adaptive gradient methods such as RMSProp and Adam use an exponential moving average of the squared gradient to compute coordinate-wise adaptive step sizes, achieving better convergence than SGD in the face of ill-conditioned or noisy objectives. However, Adam can exhibit undesirable convergence behavior due to unstable or extreme adaptive learning rates. Methods such as AMSGrad and AdaBound have been proposed to stabilize the adaptive learning rates of Adam in the later stage of training, but they do not outperform Adam in some practical tasks such as training Transformers. In this paper, we propose an adaptive learning rate principle in which the running mean of the squared gradient is replaced by a weighted mean, with weights chosen to maximize the estimated variance of each coordinate. This gives a worst-case estimate of the local gradient variance, so the method takes smaller steps when large curvature or noisy gradients are present, leading to more desirable convergence behavior than Adam. We prove that the proposed algorithm converges under mild assumptions for nonconvex stochastic optimization problems, and demonstrate the improved efficacy of our adaptive averaging approach on image classification, machine translation, and natural language understanding tasks. Moreover, our method overcomes the non-convergence issue of Adam in BERT pretraining at large batch sizes, while achieving better test performance than LAMB in the same setting.
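The sketch below illustrates the averaging principle described in the abstract: instead of a single fixed exponential moving average of the squared gradient (as in Adam), the averaging weight is chosen, per coordinate, to maximize the estimated gradient variance, yielding a worst-case second-moment estimate. This is only an illustrative simplification under assumed hyperparameters and a small candidate set of averaging coefficients, not the authors' exact algorithm.

```python
import numpy as np

def maxva_like_step(w, grad, m, v, t, lr=1e-3, beta1=0.9,
                    beta2_candidates=(0.5, 0.9, 0.999), eps=1e-8):
    """One illustrative optimizer step on parameters `w` with gradient `grad`.

    m, v : running first/second moment estimates (same shape as w).
    t    : 1-indexed step count (for bias correction).
    NOTE: candidate set, bias handling, and hyperparameters are assumptions
    for illustration; see the paper for the actual MaxVA update.
    """
    # First moment: standard exponential moving average, as in Adam.
    m = beta1 * m + (1.0 - beta1) * grad

    # Second moment: for each candidate averaging weight, form the would-be
    # estimate and keep, coordinate-wise, the one with the largest estimated
    # variance (second moment minus squared first moment).
    best_v, best_var = None, None
    for beta2 in beta2_candidates:
        v_cand = beta2 * v + (1.0 - beta2) * grad ** 2
        var_cand = v_cand - m ** 2  # per-coordinate variance estimate
        if best_v is None:
            best_v, best_var = v_cand, var_cand
        else:
            take = var_cand > best_var  # coordinates where this candidate wins
            best_v = np.where(take, v_cand, best_v)
            best_var = np.where(take, var_cand, best_var)
    v = best_v

    # Bias correction and parameter update, following the Adam template.
    m_hat = m / (1.0 - beta1 ** t)
    w = w - lr * m_hat / (np.sqrt(np.maximum(v, 0.0)) + eps)
    return w, m, v
```

Choosing the largest variance estimate per coordinate makes the denominator of the step size larger wherever gradients are noisy or curvature appears high, which is the mechanism the abstract credits for smaller, more conservative steps in those coordinates.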
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We propose a principled approach to computing the adaptive step size of first-order gradient methods that improves the convergence of Adam in the large-batch setting.
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:2006.11918/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=xpRx-gYXLN