Sporadicity in Decentralized Federated Learning: Theory and Algorithm

19 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Decentralized Federated Learning, Distributed Optimization, Sporadicity, Resource Efficiency, Sporadic SGDs, Anarchic Federated Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Decentralized Federated Learning methods are a family of techniques employed by devices in a distributed setup to (i) reach consensus over a common model which (ii) is optimal with respect to the global objective function. As this is carried out without a centralized server, prominent challenges of conventional Federated Learning become even more significant, namely heterogeneous data distributions among devices and their varying resource capabilities. In this work, we propose $\textit{Decentralized Sporadic Federated Learning}$ ($\texttt{DSpodFL}$), which introduces sporadicity into decentralized federated learning through sporadic stochastic gradient computations and sporadic model exchanges for aggregations. Our motivation is to achieve joint computation and communication savings without sacrificing statistical performance. We prove that, using a constant step size, our method achieves a geometric convergence rate to a finite optimality gap. Through numerical evaluations, we demonstrate the resource savings achieved by $\texttt{DSpodFL}$ compared to existing baselines.
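To make the abstract's description concrete, below is a minimal, hypothetical sketch of one way sporadic gradient computations and sporadic model exchanges could be combined in a decentralized round, based only on the high-level description above. The Bernoulli activation probabilities, mixing weights, and quadratic local losses are illustrative assumptions, not the paper's actual algorithm or experimental setup.

```python
import numpy as np

# Illustrative sketch (assumptions throughout): each node sporadically computes
# a local gradient and sporadically exchanges models with neighbors, using a
# constant step size as in the stated convergence result.

rng = np.random.default_rng(0)

n_nodes, dim = 5, 10
step_size = 0.05        # constant step size
p_compute = 0.6         # assumed prob. a node computes a gradient in a round
p_link = 0.5            # assumed prob. a link is active for model exchange

# Assumed heterogeneous local objectives: f_i(x) = 0.5 * ||x - c_i||^2
centers = rng.normal(size=(n_nodes, dim))
models = rng.normal(size=(n_nodes, dim))

# Assumed symmetric mixing weights over a fully connected graph
W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)

for _ in range(200):
    # Sporadic SGD indicators: node i computes a gradient only when v[i] == 1
    v = rng.binomial(1, p_compute, size=n_nodes)
    # Sporadic communication indicators: link (i, j) exchanges models only when B[i, j] == 1
    B = np.triu(rng.binomial(1, p_link, size=(n_nodes, n_nodes)), k=1)
    B = B + B.T

    new_models = models.copy()
    for i in range(n_nodes):
        # Consensus step over the links that happened to be active this round
        for j in range(n_nodes):
            if i != j and B[i, j]:
                new_models[i] += W[i, j] * (models[j] - models[i])
        # Local gradient step, only if this node computed a gradient this round
        if v[i]:
            grad = models[i] - centers[i]   # gradient of the assumed quadratic loss
            new_models[i] -= step_size * grad
    models = new_models

# Rough consensus/optimality check against the assumed global minimizer
print("spread across nodes:", np.linalg.norm(models - models.mean(axis=0)))
print("gap to global optimum:", np.linalg.norm(models.mean(axis=0) - centers.mean(axis=0)))
```

In this toy setting, skipping gradient computations and model exchanges at random still drives the nodes toward consensus near the global optimum, which is the intuition behind the computation and communication savings claimed in the abstract.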
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2036