A Game-theoretic Approach to Personalized Federated Learning Based on Target Interpolation

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Personalized Federated Learning, Game-theoretic Approach, Target Interpolation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Contrary to classical Federated Learning (FL), which focuses on collaborative learning of a shared global model via a central server, Personalized Federated Learning (PFL) trains a separate model for each user in order to address data heterogeneity and meet local demands. This paper proposes pFedGT, a game-theoretic method for personalized federated learning that adopts a novel formulation termed "target interpolation." Specifically, each user solves a local optimization problem that comprises a weighted average of two terms: one for the local loss (based on the user's data) and one for the global loss (based on all the data in the system). The latter is, of course, not accessible to the users (due to the large data volumes and privacy concerns) and is approximated via a second-order expansion, which allows for an efficient federated implementation. In pFedGT, the users play a game (by minimizing their local problems), and the algorithm supports partial participation in each round. We prove existence and uniqueness of a Nash equilibrium and establish a linear convergence rate under standard assumptions. Extensive experiments on real datasets under varying levels of statistical heterogeneity demonstrate the merits of the proposed solution. In particular, our method achieves on average 2.6\% and 3.0\% higher accuracy than leading baselines on the CIFAR-10 and CIFAR-100 datasets, respectively, and 3.17\% higher accuracy on the HAR dataset.
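As a reading aid (not taken from the submission itself), the "target interpolation" objective described in the abstract can be sketched as follows; the notation $\theta_i$, $f_i$, $f$, $\bar{\theta}$, $H$, and $\alpha_i$ is ours and may differ from the authors' actual formulation:

$$\min_{\theta_i}\; \alpha_i\, f_i(\theta_i) \;+\; (1-\alpha_i)\, \hat{f}(\theta_i), \qquad \hat{f}(\theta_i) \;=\; f(\bar{\theta}) + \nabla f(\bar{\theta})^{\top}(\theta_i - \bar{\theta}) + \tfrac{1}{2}\,(\theta_i - \bar{\theta})^{\top} H\,(\theta_i - \bar{\theta}),$$

where $f_i$ is user $i$'s local loss, $f$ is the global loss over all data, $\bar{\theta}$ is a reference (global) model, $H \approx \nabla^2 f(\bar{\theta})$ is curvature information, and $\alpha_i \in [0,1]$ is the interpolation weight. Under this sketch, the server only needs to aggregate and broadcast gradient and approximate curvature information at $\bar{\theta}$ rather than raw data, which is what makes the federated implementation feasible; the game arises because each user best-responds by minimizing its own interpolated objective.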
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4673