AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

Published: 07 Nov 2020, Last Modified: 05 May 2023
NeurIPSW 2020: DL-IG Poster
Keywords: Deep Learning, Optimization
TL;DR: An optimizer that achieves fast training, good generalization and stability.
Abstract: Optimization is at the core of modern deep learning. We propose the AdaBelief optimizer to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition behind AdaBelief is to adapt the stepsize according to the "belief" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient deviates greatly from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves accuracy comparable to SGD. Furthermore, when training a GAN on CIFAR-10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer.
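The update rule implied by this intuition can be sketched as follows. This is a minimal NumPy illustration, not the authors' official implementation: the hyperparameter names (lr, beta1, beta2, eps) and the Adam-style bias correction are assumptions following common adaptive-optimizer conventions, and details such as epsilon placement may differ from the released code.

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief-style parameter update (illustrative sketch).

    theta: current parameters
    grad:  observed (noisy) gradient at this step
    m:     EMA of gradients, i.e. the "prediction" of the next gradient
    s:     EMA of squared deviations (grad - m)**2, the "belief" term
    t:     1-based step count, used for Adam-style bias correction
    """
    m = beta1 * m + (1 - beta1) * grad            # update gradient EMA
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2  # deviation from prediction
    m_hat = m / (1 - beta1 ** t)                   # bias-corrected first moment
    s_hat = s / (1 - beta2 ** t)                   # bias-corrected belief term
    # Large deviation -> large s_hat -> small step; small deviation -> large step.
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s

# Toy usage: minimize f(x) = x^2 with noisy gradients.
theta = np.array([5.0])
m = np.zeros_like(theta)
s = np.zeros_like(theta)
for t in range(1, 1001):
    grad = 2 * theta + 0.1 * np.random.randn(*theta.shape)
    theta, m, s = adabelief_step(theta, grad, m, s, t, lr=1e-2)
```

Compared to an Adam-style update, the only structural change in this sketch is that the second moment tracks the squared deviation (grad - m)**2 rather than grad**2, so the stepsize shrinks when the observed gradient departs from its EMA prediction and grows when the two agree.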