On the Choice of Learning Rate for Local SGD

Published: 23 Jan 2024, Last Modified: 23 Jan 2024, Accepted by TMLR
Abstract: Distributed data-parallel optimization accelerates the training of neural networks, but requires constant synchronization of gradients between the workers, which can become a bottleneck. One way to reduce communication overhead is to use Local SGD, where each worker takes multiple local gradient steps without synchronization, after which the model weights are averaged. In this work, we discuss the choice of learning rate for Local SGD, showing that it faces an intricate trade-off. Unlike in the synchronous case, its gradient estimate is biased, and the bias depends on the learning rate itself. Thus, applying learning rate scaling techniques designed for faster convergence in the synchronous case to Local SGD results in a performance degradation, as previously observed. To analyze how this bias manifests, we study the convergence behaviour of Local SGD and synchronous data-parallel SGD when each uses its optimal learning rate. Our experiments show that the optimal learning rate for Local SGD differs substantially from that of SGD, and that, when it is used, the performance of Local SGD matches that of SGD. However, this performance comes at the cost of additional training iterations, rendering Local SGD faster than SGD only when communication is much more time-consuming than computation. This suggests that Local SGD may be of limited practical utility.
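As a point of reference for the Local SGD scheme described in the abstract, below is a minimal, self-contained simulation on a toy least-squares problem. The number of workers `K`, the number of local steps `H`, the learning rate `lr`, and the synthetic data are illustrative assumptions, not the paper's experimental setup; the sketch only shows the local-steps-then-average structure.

```python
# Minimal Local SGD sketch (simulated workers, toy data); not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each simulated worker holds its own shard of a linear regression problem.
d, n_per_worker, K, H, rounds, lr = 10, 256, 4, 8, 50, 0.05
w_true = rng.normal(size=d)
X = [rng.normal(size=(n_per_worker, d)) for _ in range(K)]
y = [Xk @ w_true + 0.1 * rng.normal(size=n_per_worker) for Xk in X]

def grad(w, Xk, yk, batch=32):
    """Stochastic gradient of 0.5 * ||Xk w - yk||^2 / batch on a random mini-batch."""
    idx = rng.integers(0, len(yk), size=batch)
    Xb, yb = Xk[idx], yk[idx]
    return Xb.T @ (Xb @ w - yb) / batch

w = np.zeros(d)  # shared model
for r in range(rounds):
    local = []
    for k in range(K):           # each worker starts from the shared model...
        wk = w.copy()
        for _ in range(H):       # ...takes H local SGD steps without communicating...
            wk -= lr * grad(wk, X[k], y[k])
        local.append(wk)
    w = np.mean(local, axis=0)   # ...and the workers' weights are averaged.

print("distance to w_true:", np.linalg.norm(w - w_true))
```

Setting `H = 1` recovers synchronous data-parallel SGD (averaging after every step), which makes the trade-off discussed in the abstract concrete: larger `H` reduces communication rounds but introduces a learning-rate-dependent bias from the divergence of the local iterates.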
Submission Length: Long submission (more than 12 pages of main content)
Video: https://youtu.be/0D5B1ysq5Zg
Assigned Action Editor: ~Robert_M._Gower1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1577