Decentralized SGD with Asynchronous, Local and Quantized Updates

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: distributed machine learning, SGD, decentralized algorithms, quantization
Abstract: The ability to scale distributed optimization to large node counts has been one of the main enablers of recent progress in machine learning. To this end, several techniques have been explored: asynchronous, quantized, and decentralized communication, which significantly reduce the impact of communication and synchronization, and the ability of nodes to perform several local model updates before communicating, which reduces the frequency of communication. In this paper, we show that these techniques, which have so far largely been considered independently, can be jointly leveraged to minimize the distribution cost of training neural network models via stochastic gradient descent (SGD). We consider a setting with minimal coordination: a large number of nodes on a communication graph, each holding a local subset of the data, perform independent SGD updates on their local models. After some number of local updates, each node chooses an interaction partner uniformly at random from its neighbors and averages a (possibly quantized) version of its local model with the neighbor's model. Our first contribution is to prove that, even in such a relaxed setting, SGD can still be guaranteed to converge under standard assumptions. The proof is based on a new connection with parallel load-balancing processes, and improves on existing techniques by handling decentralization, asynchrony, quantization, and local updates within a single framework and bounding their impact. On the practical side, we implement variants of our algorithm and deploy them in distributed environments, showing that they converge and scale for large-scale neural network training tasks, matching or even slightly improving on the accuracy of previous methods.
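The following is a minimal simulation sketch of the interaction pattern described in the abstract: each node runs a few local SGD steps on its own data shard, then averages a quantized copy of its model with a uniformly random neighbor. The toy least-squares objective, the ring topology, and all names (H_LOCAL, quantize, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NODES, DIM, H_LOCAL, ROUNDS, LR = 8, 10, 4, 200, 0.05

# Toy problem: each node holds a private shard of a shared least-squares task.
w_true = rng.normal(size=DIM)
shards = []
for _ in range(N_NODES):
    X = rng.normal(size=(32, DIM))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=32)))

# Communication graph (assumed here to be a ring): each node has two neighbors.
neighbors = {i: [(i - 1) % N_NODES, (i + 1) % N_NODES] for i in range(N_NODES)}

def quantize(v, levels=16):
    """Simple uniform quantizer standing in for any lossy model encoding."""
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale

models = [np.zeros(DIM) for _ in range(N_NODES)]
for _ in range(ROUNDS):
    for i in range(N_NODES):
        X, y = shards[i]
        for _ in range(H_LOCAL):  # independent local SGD updates
            j = rng.integers(len(y))
            grad = (X[j] @ models[i] - y[j]) * X[j]
            models[i] -= LR * grad
        # Pairwise averaging with a uniformly random neighbor, using
        # quantized copies of both models (no global synchronization).
        k = rng.choice(neighbors[i])
        avg = 0.5 * (quantize(models[i]) + quantize(models[k]))
        models[i], models[k] = avg.copy(), avg.copy()

print("mean distance to w_true:",
      np.mean([np.linalg.norm(w - w_true) for w in models]))
```

In this sketch the averaging step is executed sequentially for simplicity; in the asynchronous setting described in the abstract, nodes would perform these pairwise exchanges concurrently and without waiting on a global round.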
One-sentence Summary: We provide a new decentralized, local variant of SGD that allows asynchronous and quantized communication while still ensuring convergence under standard assumptions and good accuracy relative to the sequential baseline.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=DfpMKL3Kje