SCAFF-PD: Communication Efficient Fair and Robust Federated Learning

Published: 19 Jun 2023, Last Modified: 21 Jul 2023 · FL-ICML 2023
Keywords: federated learning, distributionally robust optimization
TL;DR: We present a fast and communication-efficient algorithm for distributionally robust federated learning.
Abstract: We present SCAFF-PD, a fast and communication-efficient algorithm for distributionally robust federated learning. Our approach improves fairness by optimizing a family of distributionally robust objectives tailored to heterogeneous clients. We leverage the special structure of these objectives and design an accelerated primal-dual (APD) algorithm that uses bias-corrected local steps (as in SCAFFOLD) to achieve significant gains in communication efficiency and convergence speed. We evaluate SCAFF-PD on several benchmark datasets and demonstrate its effectiveness in improving fairness and robustness while maintaining competitive accuracy. Our results suggest that SCAFF-PD is a promising approach for federated learning in resource-constrained and heterogeneous settings.
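To make the objective concrete, a minimal sketch of a distributionally robust objective of the kind the abstract describes: min_x max_{λ ∈ simplex} Σ_i λ_i f_i(x) − ψ(λ), where f_i is client i's loss and ψ is a regularizer on the dual weights. This is not the authors' SCAFF-PD algorithm (which uses accelerated primal-dual updates with bias-corrected local steps); it is a toy centralized version with an entropy regularizer ψ(λ) = τ Σ λ_i log λ_i, whose inner maximization then has a closed-form softmax solution. The quadratic client losses, step size, and temperature are all illustrative choices.

```python
import numpy as np

def entropic_dro(centers, tau=0.5, eta=0.1, T=2000):
    """Gradient descent on the entropy-smoothed worst-case loss
    min_x tau * log(sum_i exp(f_i(x) / tau)),
    with toy quadratic client losses f_i(x) = 0.5 * (x - c_i)^2."""
    centers = np.asarray(centers, dtype=float)
    x = 0.0
    lam = np.full(len(centers), 1.0 / len(centers))
    for _ in range(T):
        losses = 0.5 * (x - centers) ** 2   # per-client losses f_i(x)
        grads = x - centers                 # per-client gradients f_i'(x)
        # Dual weights: closed-form maximizer of the entropy-regularized
        # inner problem, lam_i proportional to exp(f_i(x) / tau).
        z = losses / tau
        lam = np.exp(z - z.max())
        lam /= lam.sum()
        # Primal step: descend on the lam-weighted average gradient.
        x -= eta * float(lam @ grads)
    return x, lam

# Three heterogeneous "clients" with optima at 0, 1, and 4. The robust
# solution sits near x = 2, balancing the two extreme clients, and the
# easy middle client receives almost no dual weight.
x_star, lam = entropic_dro([0.0, 1.0, 4.0])
```

As τ → 0 this recovers the hard worst-case (minimax) objective, while large τ recovers the uniform average; the paper's family of objectives plays an analogous interpolating role between average-case and worst-case client performance.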
Submission Number: 43