TEG-DI: Dynamic incentive model for Federated Learning based on Tripartite Evolutionary Game

Published: 01 Jan 2025 · Last Modified: 03 Aug 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Federated Learning (FL), a distributed machine learning approach that protects data privacy, has gained significant attention in recent years. However, some participants benefit from the global model without contributing private data (i.e., free-riding), which hinders FL’s development. Although existing studies attempt to encourage data contribution through game-theoretic models, they often overlook dynamic strategy changes across multiple training iterations, particularly spontaneous transitions in free-riding behavior. Moreover, these studies do not adequately address participants who contribute minimal data or engage only partially in training; we define this behavior as partial free-riding (PFR). To address these issues, we propose a dynamic incentive model based on tripartite evolutionary game theory (TEG-DI), which divides participants into federated organizers (FO) and federated collaborators (FC) and constructs a novel tripartite game framework together with the central server (CS). We treat PFR as a strategy available to collaborators and analyze the evolutionarily stable strategy (ESS) to explore the complex interactions between participants and the server across six scenarios. Based on this analysis, we dynamically adjust the incentive mechanisms and propose optimization suggestions for potential non-ideal evolutionary states. Experimental results demonstrate that the TEG-DI model effectively encourages free-riders to become honest participants, reducing free-riders’ gains by 10.51%.
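To illustrate the kind of dynamics the abstract describes, the sketch below simulates replicator dynamics for a generic tripartite game among an organizer (FO), a collaborator (FC) who may partially free-ride, and a central server (CS). All payoff parameters and the payoff structure itself are hypothetical placeholders for illustration only; they are not the paper's actual TEG-DI payoff matrix.

```python
# Illustrative replicator dynamics for a generic tripartite evolutionary
# game. x, y, z are the probabilities that the organizer (FO) offers an
# incentive, the collaborator (FC) participates honestly rather than
# partially free-riding, and the central server (CS) supervises.
# All payoffs below are hypothetical, not taken from the paper.

def replicator_step(x, y, z, dt=0.01):
    R, C_i = 4.0, 1.0           # FO: reward from a good model, incentive cost
    B, C_d, P = 3.0, 1.5, 2.0   # FC: incentive benefit, data cost, PFR penalty
    G, C_s = 2.0, 0.5           # CS: gain from catching free-riders, supervision cost

    # Payoff advantage of each "cooperative" strategy over its alternative.
    dU_fo = y * R - C_i          # incentivizing pays off when FC is honest
    dU_fc = x * B - C_d + z * P  # incentives plus penalties push FC toward honesty
    dU_cs = (1 - y) * G - C_s    # supervision pays off while free-riding persists

    # Replicator equation for a two-strategy population: p' = p(1-p) * dU.
    x += dt * x * (1 - x) * dU_fo
    y += dt * y * (1 - y) * dU_fc
    z += dt * z * (1 - z) * dU_cs
    return x, y, z

x, y, z = 0.5, 0.2, 0.5  # start with mostly partial free-riders
for _ in range(5000):
    x, y, z = replicator_step(x, y, z)
print(round(x, 3), round(y, 3), round(z, 3))
```

Under these placeholder payoffs the system evolves toward honest participation (y near 1), after which supervision is no longer worthwhile (z decays) — the kind of stable state an ESS analysis characterizes across the paper's six scenarios.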