Keywords: distributed training, large-scale, llm
TL;DR: Distributed training where only a subset of the outer gradients is communicated
Abstract: Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time. Since internal states and parameter gradients need to be exchanged at every gradient step, all devices need to be co-located using low-latency, high-bandwidth communication links to support the required high volume of data exchange. Recently, algorithms like DiLoCo have relaxed the constraint that all devices need co-location: accelerators can be grouped into ``workers'', where synchronizations between workers need only occur infrequently. This in turn means that workers can afford to be connected by lower-bandwidth communication links without affecting learning quality. However, in these methods, communication across workers still requires the same peak bandwidth as before, as the synchronizations require all parameters to be exchanged across all workers. In this paper, we improve DiLoCo in three ways. First, we synchronize only subsets of parameters in sequence, rather than all at once, which greatly reduces peak bandwidth. Second, we allow workers to continue training while synchronizing, which decreases wall-clock time. Third, we quantize the data exchanged by workers, which further reduces bandwidth across workers. We show experimentally that by properly combining these modifications we can distribute training of models with billions of parameters and attain models of similar quality as before, while reducing the required bandwidth by up to two orders of magnitude.
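To make the first and third modifications concrete, below is a minimal, hypothetical sketch (not the paper's actual algorithm; all function names and the simulation setup are assumptions for illustration) of synchronizing only one parameter fragment at a time, with the exchanged outer gradients quantized before averaging:

```python
def quantize(x, scale=127):
    # Simulate low-precision exchange: round the value to 1/scale steps,
    # standing in for the paper's quantized communication.
    return round(x * scale) / scale

def streaming_sync(global_params, workers, fragment):
    """Synchronize only the parameter indices in `fragment` across workers.

    Each worker communicates a quantized 'outer gradient' (its local drift
    from the shared parameters) for the fragment only, so peak bandwidth
    scales with the fragment size rather than the full model size.
    """
    for i in fragment:
        # Quantized outer gradients for parameter i, one per worker.
        deltas = [quantize(w[i] - global_params[i]) for w in workers]
        global_params[i] += sum(deltas) / len(deltas)
        for w in workers:
            # Reset only the synchronized fragment to the new shared value;
            # all other parameters keep training undisturbed.
            w[i] = global_params[i]
    return global_params
```

In this sketch, repeated calls with different fragments cycle through the whole model in sequence; parameters outside the current fragment are never touched, which is what lowers the peak (instantaneous) bandwidth even though every parameter is eventually synchronized.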
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1025