IMPROVING ADAM OPTIMIZER
Ange Tato, Roger Nkambou
Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
Abstract: We present a modified version of the Adam (Adaptive moment estimation) optimization algorithm, able to improve the speed of convergence and find a better minimum for the loss function compared to the original algorithm. The proposed solution borrows ideas from momentum-based optimizers and the exponential decay technique. The step made by Adam to update the parameters is modified so that the new step takes into consideration the direction of the gradient and the previous updates. We conducted several tests with deep Convolutional Neural Networks on the MNIST dataset. The results showed that AAdam (Accelerated Adam) outperforms Adam and NAdam (Nesterov-accelerated Adam). The preliminary evidence suggests that making such a change improves the speed of convergence and the quality of the learned models.
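The abstract does not reproduce the AAdam update rule itself. For reference, below is a minimal sketch of the original Adam update that AAdam builds on; the line computing `step` is the quantity the abstract says AAdam modifies using the gradient direction and previous updates (that modification is not specified here, so it is not implemented).

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the original Adam update (Kingma & Ba)."""
    m = beta1 * m + (1 - beta1) * grad           # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    step = lr * m_hat / (np.sqrt(v_hat) + eps)   # the step that AAdam adjusts
    return theta - step, m, v

# Toy usage: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

After 5000 iterations the parameter is close to the minimizer at 0; comparing this trajectory against a modified step is the kind of experiment the paper reports on MNIST at larger scale.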
Keywords: Neural Networks, Gradient Descent, Deep Learning, Machine Learning, Optimization algorithms