Synthetic data shuffling accelerates the convergence of federated learning under data heterogeneity

Published: 01 Apr 2024, Last Modified: 01 Apr 2024. Accepted by TMLR.
Abstract: In federated learning, data heterogeneity is a critical challenge. A straightforward solution is to shuffle the clients' data to homogenize the distribution. However, this may violate data access rights, and it is not theoretically well understood how and when shuffling can accelerate the convergence of a federated optimization algorithm. In this paper, we establish a precise and quantifiable correspondence between data heterogeneity and parameters in the convergence rate when a fraction of data is shuffled across clients. We show that shuffling can, in some cases, reduce the gradient dissimilarity quadratically in the shuffled fraction, thereby accelerating convergence. Inspired by this theory, we propose a practical approach that addresses the data access rights issue by shuffling locally generated synthetic data. Experimental results show that shuffling synthetic data improves the performance of multiple existing federated learning algorithms by a large margin.
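To make the partial-shuffling idea concrete, the following is a minimal sketch (not the authors' implementation; `shuffle_fraction`, the round-robin redistribution, and the toy label-skewed dataset are illustrative assumptions): each client contributes a fraction `p` of its samples to a common pool, which is shuffled and redistributed evenly.

```python
import random

def shuffle_fraction(client_data, p, seed=0):
    """Move a fraction p of each client's samples into a shared pool,
    shuffle the pool, and redistribute it evenly across clients.
    Hypothetical helper illustrating partial data shuffling."""
    rng = random.Random(seed)
    pool, kept = [], []
    for data in client_data:
        data = list(data)
        rng.shuffle(data)
        k = int(len(data) * p)
        pool.extend(data[:k])   # portion contributed to the shared pool
        kept.append(data[k:])   # portion that stays local
    rng.shuffle(pool)
    n = len(client_data)
    # round-robin redistribution keeps client dataset sizes balanced
    for i, sample in enumerate(pool):
        kept[i % n].append(sample)
    return kept

# Example: 3 clients with label-skewed data (each holds only one class)
clients = [[(x, c) for x in range(10)] for c in range(3)]
mixed = shuffle_fraction(clients, p=0.3)
```

In the paper's practical variant, the pooled samples would be locally generated synthetic data rather than raw client data, so the same mechanism applies without exposing real samples.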
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/lyn1874/fedssyn
Supplementary Material: zip
Assigned Action Editor: ~Nihar_B_Shah1
Submission Number: 1888