Keywords: Federated Learning, Data Heterogeneity, Client Reshuffling
Abstract: Data heterogeneity and low client participation are two key challenges in federated learning.
Client-reshuffling-based federated learning methods were recently introduced to improve client participation efficiency. However, these methods still suffer from the data heterogeneity issue. To fill this gap, we propose a new algorithm, FedCDR, to mitigate the data heterogeneity challenge in client-reshuffling-based federated learning. Our algorithm achieves the state-of-the-art $O(\epsilon^{-2})$ convergence rate for finding an $\epsilon$-approximate stationary point under standard assumptions. Unlike previous works, our method achieves convergence \textbf{independent} of the degree of data heterogeneity, \emph{i.e.,} our algorithm converges fast in highly heterogeneous data environments, whereas previous methods suffer from non-convergence or slow convergence. Moreover, our algorithm uses inexact local solvers, which are essential for practical implementation. In our theoretical analysis, client reshuffling introduces a new technical challenge: non-i.i.d. sampling bias, which complicates the convergence analysis. We design a novel potential function and adopt advanced analytical techniques to address this challenge. Our experimental results demonstrate the advantages of our method over existing algorithms on both synthetic and benchmark datasets.
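To illustrate the setting the abstract describes, below is a minimal, hypothetical sketch of client-reshuffling-based federated training with inexact local solvers on heterogeneous synthetic clients. It is not the FedCDR algorithm itself; the helper names (`client_grad`, `local_update`) and all hyperparameters are illustrative assumptions, and the local solver is plain gradient descent used as a stand-in for an inexact local solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_grad(w, data):
    # Gradient of an illustrative least-squares client objective 0.5*||Xw - y||^2 / n.
    X, y = data
    return X.T @ (X @ w - y) / len(y)

def local_update(w, data, lr=0.1, local_steps=5):
    # Inexact local solver: a few gradient steps instead of an exact local minimizer.
    w = w.copy()
    for _ in range(local_steps):
        w -= lr * client_grad(w, data)
    return w

# Synthetic heterogeneous clients: each client has its own data distribution.
d, num_clients = 5, 8
datasets = []
for c in range(num_clients):
    X = rng.normal(size=(20, d))
    w_true = rng.normal(size=d) * (c + 1)   # increasing heterogeneity across clients
    y = X @ w_true + 0.1 * rng.normal(size=20)
    datasets.append((X, y))

w = np.zeros(d)
for epoch in range(10):
    order = rng.permutation(num_clients)    # reshuffle clients each pass (without replacement)
    for c in order:                         # visit every client once per pass
        w = local_update(w, datasets[c])
    print(f"epoch {epoch}: ||w|| = {np.linalg.norm(w):.3f}")
```

Reshuffling here means clients are processed in a fresh random permutation each pass rather than sampled i.i.d. with replacement, which is the source of the non-i.i.d. sampling bias mentioned in the analysis.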
Supplementary Material: pdf
Primary Area: optimization
Submission Number: 4944