Exploiting Adam-like Optimization Algorithms to Improve the Performance of Convolutional Neural Networks

Published: 11 May 2021, Last Modified: 26 Mar 2024
MIDL 2021 Poster
Readers: Everyone
Keywords: Adam-like Optimization, Deep Networks Training, Ensemble
TL;DR: We test, propose and combine several algorithms to train CNNs on medical images.
Abstract: Stochastic gradient descent (SGD) is the main approach for training deep networks: it moves towards the optimum of the cost function by iteratively updating the parameters of a model in the direction of the gradient of the loss evaluated on a minibatch. Several variants of SGD have been proposed that adapt the step size for each parameter (adaptive gradients) and take previous updates into account (momentum). Among the many alternatives to SGD, the most popular are AdaGrad, AdaDelta, RMSProp and Adam, which scale the coordinates of the gradient by the square roots of some form of average of the squared past gradient coordinates and thereby automatically adjust the learning rate on a per-parameter basis. In this work, we compare Adam-based variants that exploit the difference between the present and the past gradients, so that the step size is adjusted for each parameter. We benchmark the proposed methods on several medical image datasets. The experiments are performed with the ResNet50 architecture. Moreover, we test ensembles of networks and their fusion with a ResNet50 trained with stochastic gradient descent. The set of ResNet50 models is combined using the simple sum rule. The proposed ensemble obtains very high performance, with accuracy comparable to or better than the current state of the art. To improve reproducibility and research efficiency, the MATLAB source code used for this research is available at GitHub: https://github.com/LorisNanni.
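As a rough illustration of the two ingredients mentioned in the abstract, an Adam-style per-parameter adaptive update and sum-rule fusion of network outputs, here is a minimal NumPy sketch. The function names (`adam_step`, `sum_rule_fusion`) and hyperparameter values are illustrative assumptions; the authors' actual code is the MATLAB implementation linked in the repository above.

```python
# Minimal sketch of an Adam-style update and sum-rule ensemble fusion.
# Illustrative only: names and hyperparameters are assumptions, not the
# authors' MATLAB implementation.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: a momentum term (m) and a running average of squared
    gradients (v) give a per-parameter adaptive step size."""
    m = beta1 * m + (1.0 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def sum_rule_fusion(score_matrices):
    """Combine the class scores of several networks with the simple sum rule:
    add the per-network score matrices (n_samples x n_classes) and take the
    argmax over classes for each sample."""
    fused = np.sum(score_matrices, axis=0)
    return fused.argmax(axis=1)
```

In this sketch, each network in the ensemble contributes one score matrix (e.g. softmax outputs on the test set), and the sum rule simply adds them before the final argmax, which is the fusion strategy described in the abstract.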
Paper Type: methodological development
Primary Subject Area: Detection and Diagnosis
Secondary Subject Area: Application: Histopathology
Paper Status: original work, not submitted yet
Source Code Url: https://github.com/LorisNanni/Exploiting-Adam-like-Optimization-Algorithms-to-Improve-the-Performance-of-Convolutional-Neural-Netw
Data Set Url: https://ome.grc.nia.nih.gov/iicbu2008/hela/index.html, https://zenodo.org/record/834910#.YFsmIa9KiCo, https://zenodo.org/record/1003200#.YFsnR69KiCp
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Source Latex: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.14689/code)