IMPROVING ADAM OPTIMIZER

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: We present a modified version of the Adam (Adaptive Moment Estimation) optimization algorithm that improves the speed of convergence and finds a better minimum of the loss function compared to the original algorithm. The proposed solution borrows ideas from the momentum-based optimizer and the exponential decay technique. The step that Adam takes to update the parameters is modified so that the new step takes into account the direction of the gradient and the previous step's update. We conducted several tests with deep Convolutional Neural Networks on the MNIST dataset. The results show that AAdam (Accelerated Adam) outperforms Adam and NAdam (Nesterov-accelerated Adam). The preliminary evidence suggests that this change improves the speed of convergence and the quality of the learned models.
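The abstract does not spell out the exact AAdam update rule, so the following is only a minimal, hypothetical sketch (in NumPy) of how a standard Adam step could be combined with a reuse of the previous update, gated by agreement with the current gradient direction and damped by an exponential decay factor. The function name aadam_like_step and the accel coefficient are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of an "accelerated" Adam update. This is NOT the paper's
# exact rule (which the abstract does not give): it augments the standard Adam
# step with a decayed fraction of the previous update, applied only where that
# update agrees in sign with the current descent direction.
import numpy as np

def aadam_like_step(theta, grad, state, lr=0.1, beta1=0.9, beta2=0.999,
                    eps=1e-8, accel=0.9):
    """One parameter update. `state` holds m, v, t, and the previous step."""
    state["t"] += 1
    t = state["t"]

    # Standard Adam moment estimates with bias correction.
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad ** 2
    m_hat = state["m"] / (1.0 - beta1 ** t)
    v_hat = state["v"] / (1.0 - beta2 ** t)
    adam_step = -lr * m_hat / (np.sqrt(v_hat) + eps)

    # Hypothetical acceleration term: reuse the previous step where it points
    # in the same direction as the negative gradient, with an exponentially
    # decaying coefficient (accel ** t). This only loosely mirrors the
    # abstract's description of combining momentum and exponential decay.
    agree = (np.sign(state["prev_step"]) == np.sign(-grad)).astype(grad.dtype)
    step = adam_step + accel ** t * agree * state["prev_step"]

    state["prev_step"] = step
    return theta + step

# Usage on a toy quadratic loss f(x) = 0.5 * ||x||^2, whose gradient is x.
theta = np.array([1.0, -2.0])
state = {"m": np.zeros_like(theta), "v": np.zeros_like(theta),
         "prev_step": np.zeros_like(theta), "t": 0}
for _ in range(500):
    theta = aadam_like_step(theta, grad=theta, state=state)
print(theta)  # moves toward the minimum at [0, 0]
```

The toy quadratic at the end only checks that the extra term does not break convergence; it does not reproduce the paper's MNIST experiments.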
Keywords: Neural Networks, Gradient Descent, Deep Learning, Machine Learning, Optimization algorithms