How to control the learning rate of adaptive sampling schemes

Lorenz Berger, Eoin Hyde, Nevil Pavithran, Faiz Mumtaz, Felix Bragman, M. Jorge Cardoso, Sebastien Ourselin

08 Apr 2018 (modified: 05 May 2023) · Submitted to MIDL 2018 · Readers: Everyone
Abstract: Deep convolutional neural networks have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks remains challenging and can require large amounts of computational resources to find network hyperparameters that yield good generalization. The procedure is further complicated when an adaptive/boosted sampling scheme is used, since such schemes vary the amount of information in mini-batches throughout training. In this work we address the task of tuning the learning rate schedule for Stochastic Gradient Descent (SGD) whilst employing an adaptive sampling procedure. We review recent theory of SGD training dynamics to help interpret our experimental findings, give a detailed description of the proposed algorithm for optimizing the SGD learning rate schedule, and show that our method generalizes well and attains state-of-the-art results on the VISCERAL Anatomy benchmark.
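The abstract describes the setting (loss-driven mini-batch sampling combined with a learning rate that must be adjusted as batch composition changes) but does not reproduce the algorithm here. The following is a minimal, hypothetical sketch of that setting in Python/NumPy on a toy problem: examples are drawn with probability proportional to their current loss, and the step size is rescaled by the sampled batch's relative "hardness". The rescaling heuristic and all names (base_lr, hardness, etc.) are illustrative assumptions, not the paper's method.

```python
# Sketch only: SGD with loss-proportional ("boosted") sampling, where the
# learning rate is rescaled each step to compensate for the extra gradient
# energy that hard-example sampling injects. Not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression problem standing in for a segmentation network.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def per_example_loss(w, X, y):
    # Binary cross-entropy per example.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def grad(w, Xb, yb):
    # Mini-batch gradient of the logistic loss.
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return Xb.T @ (p - yb) / len(yb)

w = np.zeros(d)
base_lr = 0.5   # assumed baseline learning rate
batch = 32

for step in range(500):
    # Adaptive sampling: draw the mini-batch with probability proportional
    # to each example's current loss (harder examples sampled more often).
    losses = per_example_loss(w, X, y)
    probs = losses / losses.sum()
    idx = rng.choice(n, size=batch, p=probs)

    # Learning-rate control (one plausible heuristic, not the paper's
    # schedule): shrink the step when the sampled batch is much "harder"
    # than average, so the effective update size stays comparable to
    # uniform sampling.
    hardness = losses[idx].mean() / losses.mean()
    lr = base_lr / hardness

    w -= lr * grad(w, X[idx], y[idx])

print("final mean loss:", per_example_loss(w, X, y).mean())
```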
Keywords: Image segmentation, Convolutional Neural Networks, Stochastic Gradient Descent, Adaptive Sampling, Hyperparameter tuning
Author Affiliation: Innersight Labs, University College London, Royal Free Hospital
