On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent

Sep 27, 2018 (edited Dec 21, 2018) — ICLR 2019 Conference Blind Submission
  • Abstract: Increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique (Das et al., 2016; Keskar et al., 2016). We investigate these issues, with an emphasis on time to convergence and total computational cost, through an extensive empirical analysis of network training across several architectures and problem domains, including image classification, image segmentation, and language modeling. Although it is common practice to increase the batch size in order to fully exploit available computational resources, we find a substantially more nuanced picture. Our main finding is that across a wide range of network architectures and problem domains, increasing the batch size beyond a certain point yields no decrease in wall-clock time to convergence for either train or test loss. This batch size is usually substantially below the capacity of current systems. We show that popular training strategies for large batch size optimization begin to fail before we can populate all available compute resources, and we show that the point at which these methods break down depends more on attributes like model architecture and data complexity than it does directly on the size of the dataset.
  • Keywords: Deep learning, large batch training, scaling rules, stochastic gradient descent
  • TL;DR: Large batch training yields rapidly diminishing returns in wall-clock time to convergence to a good model.
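
The abstract refers to popular training strategies for large batch size optimization; one widely used example from the literature is the linear learning-rate scaling rule with gradual warmup (Goyal et al., 2017). The sketch below is illustrative only and not taken from the paper; the base batch size, base learning rate, and warmup length are assumed values.

```python
# Minimal sketch of the linear learning-rate scaling rule with warmup,
# a common large-batch training strategy (Goyal et al., 2017).
# All hyperparameter values here are illustrative assumptions.

def scaled_lr(step, *, batch_size, base_batch_size=256, base_lr=0.1,
              warmup_steps=500):
    """Scale the base LR linearly with batch size, ramping up over warmup."""
    target_lr = base_lr * (batch_size / base_batch_size)  # linear scaling rule
    if step < warmup_steps:
        # Gradual warmup: interpolate from base_lr up to the scaled target LR.
        return base_lr + (target_lr - base_lr) * step / warmup_steps
    return target_lr

# Example: with a batch of 8192, the target LR is 0.1 * 8192/256 = 3.2.
print(scaled_lr(1000, batch_size=8192))  # -> 3.2
```

The paper's finding is that strategies of this kind stop helping beyond a model- and data-dependent batch size, well before available hardware is saturated.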