FedPnP: A Plug-and-Play Approach for Personalized Graph-Structured Federated Learning

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Personalized Federated Learning, Graph Federated Learning, Inverse Problems, Plug and Play, Federated Learning, Graph Signal Processing, Graph filtering, Half-Quadratic-Splitting
TL;DR: FedPnP leverages client graph connections and solves the personalized FL problem by establishing a bridge between the optimization problem and inverse problems.
Abstract: In Personalized Federated Learning (PFL), existing methods often overlook the intricate interconnections between clients and their local datasets, limiting effective information sharing. In this work, we introduce "FedPnP", a novel approach that leverages the inherent graph-based relationships among clients. Clients connected by a graph tend to exhibit similar model responses to similar input data, leading to a graph-based optimization problem linked to inverse problems such as compressed sensing. To tackle this optimization problem, we employ the Half-Quadratic Splitting (HQS) technique to decompose it into two subproblems. The first subproblem, acting as a data-fidelity term, ensures local models perform well on their respective datasets, while the second, serving as a sparsity-inducing term, promotes the smoothness of local model weights over the graph. Notably, we introduce a structural proximal term, a generalization of FedProx, in the first subproblem, and demonstrate that any graph denoiser with a controllable noise parameter can be plugged in as the second subproblem, offering flexibility without explicit derivation. We evaluate FedPnP on computer vision datasets (CIFAR-10, MNIST) and a human activity recognition dataset (HARBOX) to test its performance in real-world PFL scenarios. Empirical results confirm that FedPnP outperforms state-of-the-art algorithms. This novel bridge between PFL and inverse problems opens up the potential for cross-pollination of solutions, yielding superior algorithms for PFL tasks.
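The alternation the abstract describes can be sketched in a few lines. This is a minimal illustration only, not the paper's actual algorithm: the function name, the gradient-callable interface, the learning rate, the proximal weight `rho`, and the one-step neighbor-averaging denoiser are all assumptions made for the example. It shows the two HQS subproblems in turn: a local fidelity update with a proximal pull toward the auxiliary variable (the FedProx-style term), then a graph-smoothing step standing in for an arbitrary plug-and-play graph denoiser.

```python
import numpy as np

def fedpnp_hqs_sketch(local_grads, W0, A, rounds=10, lr=0.1, rho=1.0):
    """Hedged sketch of an HQS alternation in the spirit of FedPnP.

    local_grads: list of callables, local_grads[i](w) -> gradient of
                 client i's local loss at w (stands in for local training).
    W0:          (n_clients, dim) initial model weights, one row per client.
    A:           (n_clients, n_clients) client adjacency matrix.
    """
    W = W0.copy()
    Z = W0.copy()  # auxiliary (denoised) variable shared by the two steps
    # Row-normalized adjacency gives a simple one-step neighbor-averaging
    # denoiser; per the abstract, any graph denoiser with a controllable
    # noise parameter could be substituted here.
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.where(deg > 0, deg, 1)
    for _ in range(rounds):
        # Subproblem 1: data fidelity plus a structural proximal pull to Z
        for i, grad in enumerate(local_grads):
            W[i] -= lr * (grad(W[i]) + rho * (W[i] - Z[i]))
        # Subproblem 2: smooth the stacked weights over the client graph
        Z = 0.5 * W + 0.5 * (P @ W)
    return W
```

On a toy two-client problem with quadratic losses, the proximal coupling pulls each client's solution away from its personal optimum toward its graph neighbors, which is the qualitative behavior the smoothness term is meant to induce.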
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6218