A Resizable Mini-batch Gradient Descent based on a Multi-Armed Bandit

Seong Jin Cho, Sunghun Kang, Chang D. Yoo

Sep 27, 2018 · ICLR 2019 Conference Blind Submission
  • Abstract: Determining an appropriate batch size for mini-batch gradient descent is time-consuming, as it typically relies on grid search. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm, based on a multi-armed bandit, that achieves performance equivalent to that of the best fixed batch size. At each epoch, RMGD samples a batch size from a probability distribution proportional to each batch size's success in reducing the loss function. Sampling from this distribution provides a mechanism for exploring different batch sizes while exploiting those with a history of success. After the validation loss is obtained at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of that batch size. Experimental results show that RMGD achieves better performance than the best-performing single batch size. Surprisingly, RMGD also outperforms grid search, and attains this performance in less time than grid search requires.
  • TL;DR: An optimization algorithm that explores various batch sizes according to a probability distribution and automatically exploits the batch sizes that minimize validation loss.
  • Keywords: Batch size, Optimization, Mini-batch gradient descent, Multi-armed bandit