Federated Optimization for Heterogeneous Networks

Anonymous

16 May 2019 (modified: 05 May 2023) · AMTL 2019 · Readers: Everyone
Keywords: federated learning, statistical heterogeneity, distributed optimization
Abstract: Federated learning involves training and effectively combining machine learning models from distributed partitions of data (i.e., tasks) on edge devices, and can be naturally viewed as a multi-task learning problem. While Federated Averaging (FedAvg) is the leading optimization method for training non-convex models in this setting, its behavior is not well understood in realistic federated settings where the devices/tasks are statistically heterogeneous, i.e., where each device collects data in a non-identical fashion. In this work, we introduce a framework, called FedProx, to tackle statistical heterogeneity. FedProx encompasses FedAvg as a special case. We provide convergence guarantees for FedProx through a device dissimilarity assumption. Our empirical evaluation validates our theoretical analysis and demonstrates the improved robustness and stability of FedProx for learning in heterogeneous networks.
TL;DR: We introduce FedProx, a framework to tackle statistical heterogeneity in federated settings with convergence guarantees and improved robustness and stability.
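FedProx modifies each device's local objective by adding a proximal term, min_w F_k(w) + (mu/2)||w - w^t||^2, which limits how far local updates drift from the current global model w^t; setting mu = 0 recovers a FedAvg-style local update, which is how FedProx encompasses FedAvg as a special case. Below is a minimal NumPy sketch of one communication round under this formulation; the function names, the plain gradient-descent inner solver, and the unweighted server average are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def fedprox_local_update(grad_fk, w_global, mu=0.1, lr=0.01, num_steps=10):
    """Inexactly solve one device's proximal subproblem:
        min_w  F_k(w) + (mu / 2) * ||w - w_global||^2
    via plain gradient descent (an illustrative choice of inner solver).
    grad_fk(w) returns the gradient of the device's local loss F_k at w.
    Setting mu = 0 recovers a FedAvg-style local update.
    """
    w = w_global.copy()
    for _ in range(num_steps):
        # Gradient of the proximal objective: grad F_k(w) + mu * (w - w_global).
        g = grad_fk(w) + mu * (w - w_global)
        w -= lr * g
    return w

def fedprox_round(local_grads, w_global, mu=0.1, lr=0.01, num_steps=10):
    """One communication round: each selected device solves its proximal
    subproblem inexactly, then the server averages the local models
    (unweighted here for simplicity)."""
    local_models = [
        fedprox_local_update(g, w_global, mu=mu, lr=lr, num_steps=num_steps)
        for g in local_grads
    ]
    return np.mean(local_models, axis=0)
```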