Find Your Friends: Personalized Federated Learning with the Right Collaborators

TMLR Paper1208 Authors

30 May 2023 (modified: 05 Oct 2023)
Rejected by TMLR
Abstract: In the traditional federated learning setting, a central server coordinates a network of clients to train one global model. However, the global model may serve many clients poorly due to data heterogeneity. This problem can be mitigated when participating clients learn personalized models that can better serve their own needs. By noting that each client’s distribution can be represented as a mixture of all clients’ distributions, we derive a principled algorithm based on expectation maximization. Our framework, FedeRiCo, estimates the utilities of other participants’ models on each client’s data so that everyone can select the right collaborators for learning. As a result, each client can learn as much or as little from other clients as is optimal for its local data distribution. Additionally, we theoretically analyze the convergence of FedeRiCo and empirically demonstrate its communication efficiency even in the fully decentralized setting. Our algorithm outperforms other federated, personalized, and/or decentralized approaches on several benchmark datasets, being the only approach that consistently performs better than training with local data alone.
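To make the mixture idea concrete, here is a minimal, hypothetical sketch of the kind of weight estimation the abstract describes: each client scores every participant's model by its likelihood on the client's local data and renormalizes, so that collaboration weights concentrate on the models that fit best. The 1-D Gaussian "models", values, and names below are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the E-step the abstract alludes to: client i
# treats its data as a mixture of all clients' distributions and
# re-weights each collaborator by how well that client's model explains
# the local data. The 1-D Gaussian "models" are illustrative stand-ins.

rng = np.random.default_rng(0)

def avg_log_lik(mu, sigma, x):
    """Average Gaussian log-likelihood of data x under model (mu, sigma)."""
    return np.mean(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

local_data = rng.normal(1.0, 1.0, size=200)        # client i's local data
models = [(0.9, 1.0), (5.0, 1.0), (-3.0, 2.0)]     # all clients' current models
weights = np.full(len(models), 1.0 / len(models))  # uniform prior over collaborators

# E-step: posterior responsibility of each client's model on local data,
# computed in log space for numerical stability.
scores = np.log(weights) + np.array([avg_log_lik(mu, s, local_data) for mu, s in models])
scores -= scores.max()
weights = np.exp(scores) / np.exp(scores).sum()

print(np.round(weights, 3))  # mass concentrates on the "right collaborators"
```

In the full algorithm this weight estimate would presumably alternate with local model updates (the M-step); the paper derives the exact updates and analyzes their convergence.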
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We updated the convergence proof in Appendix B to address the reviewers' concerns.
Assigned Action Editor: ~Aurélien_Bellet1
Submission Number: 1208