IMPROVING ADAM OPTIMIZER

Ange Tato, Roger Nkambou

Feb 12, 2018 (modified: Jun 04, 2018) ICLR 2018 Workshop Submission
  • Abstract: We present a modified version of the Adam (Adaptive moment estimation) optimization algorithm that improves the speed of convergence and finds a better minimum for the loss function compared to the original algorithm. The proposed solution borrows ideas from the momentum-based optimizer and the exponential decay technique. The step Adam takes to update the parameters is modified so that the new step takes into consideration the direction of the gradient and the previous update. We conducted several tests with deep Convolutional Neural Networks on the MNIST data. The results showed that AAdam (Accelerated Adam) outperforms Adam and NAdam (Nesterov-accelerated Adam). The preliminary evidence suggests that making such a change improves the speed of convergence and the quality of the learned models.
  • Keywords: Neural Networks, Gradient Descent, Deep Learning, Machine Learning, Optimization algorithms
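The abstract does not spell out the AAdam update rule itself, but the original Adam update it modifies can be sketched for reference as follows. This is a minimal, self-contained sketch of standard Adam (hyperparameter defaults as commonly published); the function name `adam_step` and the toy quadratic objective are illustrative choices, not part of the submission.

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the original Adam update.

    theta: current parameter value (scalar, for illustration)
    grad:  gradient of the loss at theta
    m, v:  running first- and second-moment estimates
    t:     1-based step counter (needed for bias correction)
    Returns the updated (theta, m, v).
    """
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum-like)
    v = beta2 * v + (1 - beta2) * grad * grad   # second moment (per-parameter scale)
    m_hat = m / (1 - beta1 ** t)                # bias correction for m
    v_hat = v / (1 - beta2 ** t)                # bias correction for v
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2 starting from theta = 1.0
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
print(theta)  # should be close to 0
```

AAdam, as described in the abstract, alters the step this function takes so that it also accounts for the gradient's direction and the previous update; NAdam, the other baseline mentioned, instead applies a Nesterov-style look-ahead to the first-moment term.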