Spatially Parallel Convolutions

12 Feb 2018 (modified: 05 May 2023), ICLR 2018 Workshop Submission
Abstract: The training of convolutional neural networks with large inputs on GPUs is limited by the available GPU memory capacity. In this work, we describe spatially parallel convolutions, which sidestep the memory capacity limit of a single GPU by partitioning tensors along their spatial axes across multiple GPUs. On modern multi-GPU systems, we demonstrate that spatially parallel convolutions attain excellent scaling when applied to input tensors with large spatial dimensions.
Keywords: deep learning, convolution, parallelism
TL;DR: Spatially parallel convolutions reduce per-GPU memory usage and scale excellently to multiple GPUs.
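The abstract describes partitioning tensors along their spatial axes across multiple GPUs, so that each device holds only a slice of the input plus the border rows its convolution window reaches into. The sketch below illustrates that idea in a single process, using NumPy arrays in place of per-GPU tensors; the function names, the height-only partitioning, and the explicit halo slicing are illustrative assumptions, not the authors' implementation, where the overlap would be obtained by a halo exchange between neighbouring GPUs.

import numpy as np

def conv2d_valid(x, k):
    # Plain 'valid' 2D cross-correlation (stride 1, no padding).
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def spatially_parallel_conv2d(x, k, num_parts):
    # Partition x along its height axis, extend each part by the halo rows
    # it needs from its neighbour, convolve each part independently, and
    # concatenate the per-part outputs. Each part stands in for one GPU.
    kh, _ = k.shape
    halo = kh - 1  # extra input rows required beyond a part's own slice
    bounds = np.linspace(0, x.shape[0] - halo, num_parts + 1, dtype=int)
    outputs = []
    for p in range(num_parts):
        lo, hi = bounds[p], bounds[p + 1] + halo
        outputs.append(conv2d_valid(x[lo:hi], k))
    return np.concatenate(outputs, axis=0)

x = np.random.rand(64, 64)
k = np.random.rand(3, 3)
# The partitioned result matches the convolution over the full input.
assert np.allclose(spatially_parallel_conv2d(x, k, num_parts=4), conv2d_valid(x, k))

In the multi-GPU setting each partition (and the corresponding activations) lives on a different device, so peak per-GPU memory scales roughly with 1/num_parts, which is what allows inputs with very large spatial dimensions to be trained at all.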
