PEARL-Prox: Proximal Algorithm for Resolving Player Drift in Multiplayer Federated Learning

Published: 22 Sept 2025, Last Modified: 01 Dec 2025. NeurIPS 2025 Workshop. License: CC BY 4.0
Keywords: Federated Learning, Game theory, Multiplayer games, Convergence guarantees, Communication efficiency, Multiplayer Federated Learning
TL;DR: Proposes the Per-Player Local Proximal Algorithm (PEARL-Prox) to resolve player drift in Multiplayer Federated Learning (MpFL).
Abstract: Recently, Yoon et al. (2025) introduced multiplayer federated learning (MpFL), a novel federated learning framework that models strategically behaving, rational clients. In MpFL, the clients are modeled as players of a multiplayer game with individual objectives, aiming to reach an equilibrium. While the Per-Player Local Stochastic Gradient Descent (PEARL-SGD) algorithm has been proposed as a counterpart of Local SGD in the MpFL setup, it exhibits the *player drift* phenomenon: excessive local updates by individual players cause the global dynamics to diverge. In this work, we formalize the concept of player drift and propose the *Per-Player Local Proximal Algorithm (PEARL-Prox)* to resolve it. PEARL-Prox lets each player optimize a regularized objective to high accuracy, ensuring convergence to the equilibrium while allowing players to exploit their local compute budgets. Consequently, PEARL-Prox offers a significantly improved communication complexity of $\mathcal{O}\left(\log\epsilon^{-1}\right)$ compared to the $\Omega\left(\epsilon^{-1/2}\right)$ complexity of PEARL-SGD under the same theoretical assumptions.
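To make the proximal idea in the abstract concrete, here is a minimal sketch on a hypothetical two-player quadratic game (the game, the coupling constant `a`, and the step size `eta` are illustrative assumptions, not the paper's experimental setup). Each communication round, every player exactly solves its own proximal subproblem against the other player's last communicated action, and the joint iterate contracts toward the equilibrium at a geometric rate, consistent with an $\mathcal{O}(\log\epsilon^{-1})$ round count:

```python
import math

# Hypothetical two-player quadratic game (illustration only):
#   player 1 minimizes f1(x, y) = 0.5*x**2 + a*x*y  over x
#   player 2 minimizes f2(x, y) = 0.5*y**2 - a*x*y  over y
# The unique equilibrium is (x, y) = (0, 0).
a, eta = 1.0, 1.0  # coupling strength and proximal step size (assumed values)

def pearl_prox_round(x, y):
    """One communication round: each player solves its regularized subproblem
        min_z  f_i(z, other) + (1 / (2 * eta)) * (z - z_prev)**2
    holding the other player's last communicated action fixed.
    For these quadratics the proximal solution is available in closed form."""
    x_new = (x - eta * a * y) / (1.0 + eta)
    y_new = (y + eta * a * x) / (1.0 + eta)
    return x_new, y_new

x, y = 1.0, 1.0  # initial actions
for _ in range(50):
    x, y = pearl_prox_round(x, y)

print(math.hypot(x, y))  # distance to the equilibrium shrinks geometrically
```

In this toy instance the per-round contraction factor is $\sqrt{1+\eta^2 a^2}/(1+\eta)$, which is strictly below 1, so the distance to the equilibrium decays linearly in the number of communication rounds regardless of how accurately each local subproblem is solved beyond the closed form used here.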
Submission Number: 125