Cooperating or Kicking Out: Defending Against Poisoning Attacks in Federated Learning via the Evolution of Cooperation

Published: 2025, Last Modified: 10 Nov 2025 · IEEE Trans. Dependable Secur. Comput. 2025 · CC BY-SA 4.0
Abstract: Federated learning (FL) trains a global model by aggregating local updates from multiple clients under a server's guidance. Despite its potential, FL is vulnerable to poisoning attacks in which malicious clients intentionally corrupt their updates, compromising the global model's accuracy. Current defense strategies aim to tolerate or remove such corrupt updates, but they are not fully effective at preventing malicious clients from sending poisonous updates to the server, leaving the global model at risk. We propose a novel approach based on the evolution of cooperation, which promotes system-wide collaboration. Our defense allows the server to selectively engage clients in the training process, encouraging them to provide clean updates or excluding those that are persistently malicious. We also introduce an attack framework in which clients initially send clean updates to gain trust before sending malicious ones later. This attack model, designed to simulate advanced threats, can adapt to various attack types to increase its impact. Our experimental results show that our defense significantly improves resilience against such attacks, effectively safeguarding the global model even under complex threat scenarios.
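The abstract summarizes rather than specifies the mechanism, so the following is only a minimal sketch of how an evolution-of-cooperation style defense and a trust-building attacker could interact in an FL training loop. All names, thresholds, the reward/penalty scoring, the suspicion test, and the client interface below are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class TwoFacedClient:
    """Attack-framework sketch: behave cleanly to build trust, then poison."""
    cid: int
    clean_rounds: int = 5          # rounds of honest behaviour before attacking
    rounds_seen: int = 0

    def local_update(self, global_model: list[float]) -> list[float]:
        self.rounds_seen += 1
        honest = [w + 0.01 for w in global_model]        # stand-in local step
        if self.rounds_seen <= self.clean_rounds:
            return honest                                 # gain trust first
        return [-10.0 * w for w in honest]                # then poison

def federated_round(global_model, clients, scores, excluded,
                    reward=1.0, penalty=2.0, kick_threshold=-3.0,
                    suspicion_bound=5.0):
    """Defense sketch: engage only trusted clients, score each update,
    aggregate the cooperative ones, and kick out persistent defectors."""
    accepted = []
    for c in clients:
        if c.cid in excluded:
            continue                                      # not engaged this round
        update = c.local_update(global_model)
        deviation = max(abs(u - g) for u, g in zip(update, global_model))
        if deviation > suspicion_bound:                   # crude suspicion test
            scores[c.cid] = scores.get(c.cid, 0.0) - penalty
        else:
            scores[c.cid] = scores.get(c.cid, 0.0) + reward
            accepted.append(update)
        if scores[c.cid] < kick_threshold:
            excluded.add(c.cid)                           # "kicking out"
    if not accepted:
        return global_model
    # FedAvg over the updates judged clean this round
    return [sum(ws) / len(ws) for ws in zip(*accepted)]
```

In this toy dynamic, a client that turns malicious after its trust-building phase keeps losing score on every poisoned round and eventually drops below the exclusion threshold, after which the server stops engaging it, which is the "cooperating or kicking out" behaviour the title alludes to.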