Online Batch Selection for Faster Training of Neural Networks
Ilya Loshchilov, Frank Hutter
Feb 17, 2016 (modified: Feb 17, 2016) · ICLR 2016 workshop submission · Readers: everyone
Abstract: Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset.
While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood.
We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. Since the loss function to be minimized over the whole dataset is an aggregation of the loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy in which all datapoints are ranked w.r.t. their latest known loss value, and the probability of being selected decays exponentially as a function of rank.
Our experimental results on the MNIST dataset suggest that this batch selection strategy speeds up both AdaDelta and Adam by a factor of about 5.
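The rank-based selection described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the parameter name `pressure` (the ratio between the selection probabilities of the highest- and lowest-ranked datapoints) and the use of sampling with replacement are assumptions made here for simplicity.

```python
import math
import random

def rank_probabilities(n, pressure=100.0):
    """Selection probability decays exponentially with rank:
    p_i proportional to base**(-i), with base chosen so that
    p_1 / p_n is roughly equal to `pressure` (an assumed knob)."""
    base = math.exp(math.log(pressure) / n)
    weights = [base ** -(i + 1) for i in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def select_batch(latest_losses, batch_size, pressure=100.0):
    # Rank datapoints by latest known loss, highest loss first (rank 1).
    order = sorted(range(len(latest_losses)),
                   key=lambda i: latest_losses[i], reverse=True)
    probs = rank_probabilities(len(latest_losses), pressure)
    # Sample ranks according to the exponentially decaying distribution
    # (with replacement, a simplification for this sketch).
    picks = random.choices(range(len(order)), weights=probs, k=batch_size)
    return [order[r] for r in picks]
```

Increasing `pressure` concentrates sampling on the highest-loss datapoints; a value of 1 recovers uniform sampling.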
Conflicts: uni-freiburg.de, epfl.ch, inria.fr