Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays

Published: 31 Oct 2022, Last Modified: 03 Jul 2024 · NeurIPS 2022 Accept
Keywords: Stochastic, Convex, Non-convex, Asynchronous, Parallel, Optimization
TL;DR: We prove much better theoretical guarantees for asynchronous SGD, which depend on the number of workers rather than the delays.
Abstract: The existing analysis of asynchronous stochastic gradient descent (SGD) degrades dramatically when any delay is large, giving the impression that performance depends primarily on the delay. On the contrary, we prove much better guarantees for the same asynchronous SGD algorithm regardless of the delays in the gradients, depending instead just on the number of parallel devices used to implement the algorithm. Our guarantees are strictly better than the existing analyses, and we also argue that asynchronous SGD outperforms synchronous minibatch SGD in the settings we consider. For our analysis, we introduce a novel recursion based on "virtual iterates" and delay-adaptive stepsizes, which allow us to derive state-of-the-art guarantees for both convex and non-convex objectives.
Supplementary Material: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/asynchronous-sgd-beats-minibatch-sgd-under/code)
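
Since the abstract describes the algorithm only at a high level, here is a minimal, single-process sketch of asynchronous SGD with a delay-adaptive stepsize on a toy quadratic objective. The `base_lr / (1 + delay)` rule, the random worker-completion model, and all names (`stochastic_grad`, `snapshots`, etc.) are illustrative assumptions for this sketch, not the paper's exact algorithm or stepsize schedule.

```python
# Illustrative simulation of asynchronous SGD with a delay-adaptive stepsize.
# Workers compute gradients at stale iterates; the server scales the stepsize
# down with the staleness of each applied gradient (assumed rule, not the
# paper's exact one).
import numpy as np

rng = np.random.default_rng(0)
d, num_workers, num_steps, base_lr = 10, 4, 2000, 0.5
A = rng.standard_normal((d, d))
A = A @ A.T / d + np.eye(d)          # symmetric positive definite Hessian
b = rng.standard_normal(d)

def stochastic_grad(x):
    # Gradient of 0.5 * x^T A x - b^T x, plus Gaussian noise.
    return A @ x - b + 0.1 * rng.standard_normal(d)

x = np.zeros(d)
# Each worker stores the iterate it last read and the step at which it read it.
snapshots = [(x.copy(), 0)] * num_workers

for t in range(1, num_steps + 1):
    w = rng.integers(num_workers)     # an arbitrary worker finishes its gradient
    x_old, t_read = snapshots[w]
    delay = (t - 1) - t_read          # staleness of the gradient being applied
    lr = base_lr / (1 + delay)        # delay-adaptive stepsize (assumed rule)
    x -= lr * stochastic_grad(x_old)  # server applies the stale gradient
    snapshots[w] = (x.copy(), t)      # worker reads the fresh iterate

x_star = np.linalg.solve(A, b)
print(f"distance to optimum: {np.linalg.norm(x - x_star):.4f}")
```

Note that, as in the abstract, the number of workers (`num_workers`) enters the simulation directly, while the per-gradient delay only shrinks the stepsize of that particular update.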