Preserved central model for faster bidirectional compression in distributed settings

21 May 2021, 20:44 (modified: 26 Oct 2021, 20:02) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: Federated Learning, Bidirectional Compression, Data Heterogeneity, Optimization
TL;DR: We propose a new algorithm for a distributed learning problem that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink compression by preserving the model on the central server.
Abstract: We develop a new approach to tackle communication constraints in a distributed learning problem with a central server. We propose and analyze a new algorithm that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink (from the local workers to the central server) compression. To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved. As a result, and contrary to previous works, the gradients on local workers are computed on perturbed models. Consequently, convergence proofs are more challenging and require precise control of this perturbation. To ensure it, MCM additionally combines model compression with a memory mechanism. This analysis opens new doors, e.g., incorporating worker-dependent randomized models and partial participation.
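To make the core idea concrete, here is a minimal, simplified sketch (not the authors' exact MCM algorithm) of bidirectional compression with a preserved central model: the server keeps its model `w` uncompressed, broadcasts only a compressed difference relative to a shared memory `H`, and workers compute gradients on the resulting perturbed model `w_hat`. The rand-k compressor, the toy quadratic objectives, and all step sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Unbiased random-k sparsification: keep k coordinates, rescale by d/k."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

# Toy quadratic objective per worker: f_i(w) = 0.5 * ||w - t_i||^2,
# so the global optimum is the mean of the targets t_i.
targets = [np.full(10, float(i)) for i in range(4)]
w_star = np.mean(targets, axis=0)

d, k, lr, alpha = 10, 3, 0.1, 0.3
w = np.zeros(d)       # global model: kept uncompressed on the server
H = np.zeros(d)       # downlink memory, shared by server and workers
w_avg = np.zeros(d)   # running average of the last half of the iterates

T = 2000
for step in range(T):
    # Downlink: compress only the difference to the memory, so workers
    # receive a perturbed model w_hat while the server's w stays exact.
    delta = rand_k(w - H, k)
    w_hat = H + delta
    H = H + alpha * delta  # memory update keeps w - H small over time

    # Uplink: each worker compresses its gradient, computed at w_hat.
    g = np.mean([rand_k(w_hat - t, k) for t in targets], axis=0)

    # The server updates the preserved (uncompressed) global model.
    w = w - lr * g
    if step >= T // 2:
        w_avg += w / (T - T // 2)

err = float(np.linalg.norm(w_avg - w_star))
print(err)  # averaged iterate lands close to the optimum
```

The key design point this sketch illustrates: because the server never overwrites `w` with a compressed version, downlink compression error only perturbs the point at which gradients are evaluated, and the memory mechanism shrinks that perturbation over time.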
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/philipco/mcm-bidirectional-compression