SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training

Published: 31 Oct 2022, Last Modified: 25 Oct 2022, NeurIPS 2022 Accept
Keywords: data parallelism, communication optimization, staleness mitigation
Abstract: Data parallelism across multiple machines is widely adopted for accelerating distributed deep learning, but linear speedup is hard to achieve due to heavy communication overhead. In this paper, we propose SAPipe, a performant system that pushes the training speed of data parallelism to its fullest extent. By introducing partial staleness, SAPipe overlaps communication with computation while keeping staleness minimal. To mitigate the problems incurred by staleness, SAPipe adopts staleness compensation techniques, including weight prediction and delay compensation, with provably lower error bounds. Additionally, SAPipe presents an algorithm-system co-design with runtime optimizations that minimize the system overhead of the stale training pipeline and staleness compensation. We have implemented SAPipe in the BytePS framework, compatible with both TensorFlow and PyTorch. Our experiments show that SAPipe achieves up to 157% speedup over BytePS (non-stale) and outperforms PipeSGD in accuracy by up to 13.7%.
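To make the two staleness-compensation ideas named in the abstract concrete, here is a minimal PyTorch-style sketch of weight prediction and delay compensation as they are commonly formulated in the literature. The function names, the momentum-based extrapolation, and the coefficient `lam` are illustrative assumptions, not SAPipe's actual implementation.

```python
# Hypothetical sketch of weight prediction and delay compensation for stale
# gradients. Not SAPipe's code; a generic illustration of the two techniques.
import torch

def predict_weights(param, momentum_buf, lr, momentum, staleness):
    """Weight prediction: extrapolate the parameters `staleness` steps ahead
    along the momentum direction, so the gradient is computed on an estimate
    of the weights it will eventually be applied to."""
    # Assumes each future step moves the weights by roughly -lr * momentum * momentum_buf.
    return param - lr * momentum * staleness * momentum_buf

def delay_compensated_grad(grad, param_now, param_stale, lam=0.5):
    """Delay compensation: first-order correction of a stale gradient,
    approximating the Hessian with the element-wise product grad * grad."""
    return grad + lam * grad * grad * (param_now - param_stale)

# Usage sketch: apply a stale gradient with delay compensation.
w = torch.randn(10)                            # current weights
w_stale = w + 0.01 * torch.randn(10)           # weights at the time the gradient was computed
g_stale = torch.randn(10)                      # stale gradient computed on w_stale
g = delay_compensated_grad(g_stale, w, w_stale)
w -= 0.1 * g                                   # plain SGD step with the compensated gradient
```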
TL;DR: We design a performant and staleness-aware communication pipeline system for accelerating distributed DNN training.
Supplementary Material: pdf