Spatially Parallel Convolutions

Peter Jin, Boris Ginsburg, Kurt Keutzer

Feb 12, 2018 (modified: Jun 04, 2018) ICLR 2018 Workshop Submission
  • Abstract: The training of convolutional neural networks with large inputs on GPUs is limited by the available GPU memory capacity. In this work, we describe spatially parallel convolutions, which sidestep the memory capacity limit of a single GPU by partitioning tensors along their spatial axes across multiple GPUs. On modern multi-GPU systems, we demonstrate that spatially parallel convolutions attain excellent scaling when applied to input tensors with large spatial dimensions.
  • Keywords: deep learning, convolution, parallelism
  • TL;DR: Spatially parallel convolutions reduce per-GPU memory usage and scale excellently to multiple GPUs.
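
The abstract describes partitioning an input tensor along its spatial axes so that each GPU holds only a slice of the activations; before a convolution can be applied, each slice needs a small halo of rows or columns from its neighbours. The sketch below is an editorial illustration of that idea, not code from the submission. It assumes PyTorch and runs in a single process: the input is split along the height axis into shards (stand-ins for GPUs), each shard is extended with halo rows, and the per-shard convolutions reproduce the full-tensor result. In an actual multi-GPU setting the halo rows would be exchanged between devices (e.g. via NCCL) rather than sliced from a local copy.

    # Minimal single-process sketch of a spatially parallel convolution
    # (illustrative only; hypothetical shard layout, not the authors' code).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    x = torch.randn(1, 16, 64, 64)   # N, C, H, W
    w = torch.randn(32, 16, 3, 3)    # 3x3 kernel, stride 1
    halo = 1                         # (kernel_size - 1) // 2 rows per side
    num_shards = 4                   # stand-in for the number of GPUs

    # Reference: ordinary convolution on the whole tensor.
    ref = F.conv2d(x, w, padding=1)

    # Spatially parallel version: convolve each height shard plus its halo rows.
    H = x.shape[2]
    shard_h = H // num_shards
    outputs = []
    for i in range(num_shards):
        lo, hi = i * shard_h, (i + 1) * shard_h
        # Extend the shard by halo rows taken from neighbouring shards,
        # clipped at the tensor boundary.
        src_lo, src_hi = max(lo - halo, 0), min(hi + halo, H)
        shard = x[:, :, src_lo:src_hi, :]
        # Zero-pad only where a neighbour does not exist (the tensor border),
        # plus the usual left/right padding along the width axis.
        pad_top = halo - (lo - src_lo)
        pad_bot = halo - (src_hi - hi)
        shard = F.pad(shard, (1, 1, pad_top, pad_bot))
        outputs.append(F.conv2d(shard, w))

    out = torch.cat(outputs, dim=2)
    print(torch.allclose(out, ref, atol=1e-5))  # True: shards match the full result

The halo width generalizes to (kernel_size - 1) // 2 rows per side for any odd, stride-1 kernel; larger kernels or strided convolutions change how much neighbouring data each shard must receive before computing its slice of the output.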