On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data

TMLR Paper 2340 Authors

06 Mar 2024 (modified: 20 Apr 2024) · Under review for TMLR
Abstract: Existing theoretical results (e.g., Woodworth et al., 2020a) predict that the performance of federated averaging (FedAvg) degrades under high data heterogeneity. In practice, however, FedAvg converges well on several naturally heterogeneous datasets. To explain this seemingly unreasonable effectiveness of FedAvg, which contradicts previous theoretical predictions, this paper introduces the client consensus hypothesis: on certain federated datasets, the average of the clients' local model updates, when local training starts from the global optimum, is close to zero. We prove that under this hypothesis, data heterogeneity does not slow the convergence of FedAvg. Moreover, we show that this hypothesis holds for a linear regression problem and for some naturally heterogeneous datasets such as FEMNIST and StackOverflow. We therefore believe that this hypothesis better explains the performance of FedAvg in practice.
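The linear regression case mentioned in the abstract can be checked numerically. The sketch below (not the paper's code; the synthetic data, step size, and one-step local update are illustrative assumptions) builds heterogeneous least-squares clients, computes the global optimum of the average loss, and verifies that the average of the clients' local updates taken from that optimum is near zero even though each individual update is not:

```python
import numpy as np

# Illustrative sketch (not the paper's code): check the client consensus
# hypothesis on a synthetic heterogeneous linear regression problem.
rng = np.random.default_rng(0)
k, d, n = 10, 5, 50  # clients, feature dimension, samples per client

# Heterogeneous clients: each has its own feature distribution and targets.
clients = []
for i in range(k):
    A = rng.normal(loc=i - k / 2, scale=1.0, size=(n, d))
    b = rng.normal(size=n)
    clients.append((A, b))

# Global optimum w* of the average loss (1/k) sum_i ||A_i w - b_i||^2 / (2n),
# obtained by solving the stationarity condition H w = c.
H = sum(A.T @ A / n for A, _ in clients) / k
c = sum(A.T @ b / n for A, b in clients) / k
w_star = np.linalg.solve(H, c)

# One local full-batch gradient step per client, starting from w*.
# With a single step, the average update is exactly -lr times the average
# gradient at w*, which vanishes by optimality; individual updates do not.
lr = 0.01
deltas = []
for A, b in clients:
    grad = A.T @ (A @ w_star - b) / n
    deltas.append(-lr * grad)

avg_update = np.mean(deltas, axis=0)
avg_norm = np.mean([np.linalg.norm(d_) for d_ in deltas])
print("norm of averaged update:", np.linalg.norm(avg_update))
print("average norm of individual updates:", avg_norm)
```

For a single local step the cancellation is exact by first-order optimality; the hypothesis in the abstract is the stronger statement that this near-cancellation persists for multi-step local updates on certain real federated datasets.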
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Furong_Huang1
Submission Number: 2340