Abstract: Clustering clients with similar objectives and learning a model per cluster is an intuitive and interpretable approach to personalization in federated learning. However, doing so with provable and optimal guarantees has remained an open challenge. In this work, we formalize personalized federated learning as a stochastic optimization problem. We propose simple clustering-based algorithms that iteratively identify clusters of clients and train a model within each, using local client gradients. Our algorithms have optimal convergence rates that asymptotically match those obtained if we knew the true underlying clustering of the clients, and they are provably robust in the Byzantine setting where some fraction of the clients are malicious.
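To make the abstract's description concrete, below is a minimal sketch of iterative clustering-based personalized federated learning on a synthetic least-squares problem: the server keeps one model per cluster, each client adopts the cluster model with the smallest local loss, sends that model's local gradient, and the server averages gradients within each cluster. This is an illustrative assumption of the general scheme, not the authors' exact algorithm; it omits the Byzantine-robust aggregation, and all names, data, and hyperparameters are hypothetical.

```python
# Hypothetical sketch of iterative clustering-based personalized FL
# (not the paper's exact method; synthetic data and names are assumptions).
import numpy as np

rng = np.random.default_rng(0)
NUM_CLUSTERS, NUM_CLIENTS, DIM, N_LOCAL, ROUNDS, LR = 2, 20, 5, 50, 100, 0.1

# Each client's data comes from one of NUM_CLUSTERS ground-truth linear models
# (the true cluster assignment is unknown to the algorithm).
true_models = rng.normal(size=(NUM_CLUSTERS, DIM))
client_truth = rng.integers(NUM_CLUSTERS, size=NUM_CLIENTS)
client_data = []
for c in range(NUM_CLIENTS):
    X = rng.normal(size=(N_LOCAL, DIM))
    y = X @ true_models[client_truth[c]] + 0.1 * rng.normal(size=N_LOCAL)
    client_data.append((X, y))

def local_loss(w, X, y):
    return 0.5 * np.mean((X @ w - y) ** 2)

def local_grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

# Server-side cluster models, randomly initialized.
models = rng.normal(size=(NUM_CLUSTERS, DIM))

for _ in range(ROUNDS):
    grads = [np.zeros(DIM) for _ in range(NUM_CLUSTERS)]
    counts = [0] * NUM_CLUSTERS
    for X, y in client_data:
        # Cluster identification: each client picks the model with smallest local loss.
        j = int(np.argmin([local_loss(models[k], X, y) for k in range(NUM_CLUSTERS)]))
        grads[j] += local_grad(models[j], X, y)
        counts[j] += 1
    for k in range(NUM_CLUSTERS):
        if counts[k] > 0:
            # Cluster-wise aggregation of local gradients, then a gradient step.
            models[k] -= LR * grads[k] / counts[k]

# Report how close each learned cluster model is to its nearest true model.
for k in range(NUM_CLUSTERS):
    err = min(np.linalg.norm(models[k] - t) for t in true_models)
    print(f"cluster model {k}: distance to nearest true model = {err:.3f}")
```

In a Byzantine-robust variant of this sketch, the per-cluster mean of gradients would be replaced by a robust aggregation rule so that a malicious fraction of clients cannot arbitrarily corrupt a cluster's update.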
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have de-anonymized the submission.
Supplementary Material: zip
Assigned Action Editor: ~Zachary_B._Charles1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1440