DPFL: Decentralized Personalized Federated Learning

Published: 22 Jan 2025 · Last Modified: 21 Mar 2025 · AISTATS 2025 Poster · CC BY 4.0
Abstract: This work addresses the challenges of data heterogeneity and communication constraints in decentralized federated learning (FL). We introduce decentralized personalized FL (DPFL), a bi-level optimization framework that enhances personalized FL by leveraging combinatorial relationships among clients, enabling fine-grained and targeted collaborations. By employing a constrained greedy algorithm, DPFL constructs a collaboration graph that guides clients in choosing suitable collaborators, allowing personalized model training tailored to local data while respecting a predefined communication and resource budget. Our theoretical analysis demonstrates that the proposed objective for constructing the collaboration graph yields performance superior or equal to that of any alternative collaboration structure, including pure local training. Extensive experiments across diverse datasets show that DPFL consistently outperforms existing methods, effectively handling non-IID data, reducing communication overhead, and improving resource efficiency in real-world decentralized FL scenarios. The code can be accessed at: https://github.com/salmakh1/DPFL.
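The abstract describes a constrained greedy algorithm that builds a collaboration graph under a per-client budget. The following is a minimal sketch of that idea, not the paper's actual method: the pairwise utility scores, the `budget` parameter, and the function name are all illustrative assumptions, and the paper's bi-level objective is not reproduced here.

```python
# Hypothetical sketch: each client greedily selects the collaborators
# with the highest pairwise utility, up to a fixed communication budget.
# The utility function DPFL actually optimizes is defined in the paper;
# here it is assumed to be given as precomputed scores.

def build_collaboration_graph(utility, budget):
    """Greedy, budget-constrained collaborator selection.

    utility: dict mapping client -> {other_client: score}
    budget:  maximum number of collaborators per client
    returns: dict mapping client -> list of chosen collaborators
    """
    graph = {}
    for client, scores in utility.items():
        # Rank candidates by descending utility and keep only as many
        # as the budget allows.
        ranked = sorted(scores, key=scores.get, reverse=True)
        graph[client] = ranked[:budget]
    return graph
```

A client would then train its personalized model by aggregating updates only from the neighbors listed in `graph[client]`, keeping communication within the budget.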
Submission Number: 2042
