No Trust, No Learning? Improving Federated Learning Security and Robustness with Reputation and Trust
Keywords: federated learning security, data poisoning, malicious attacks detection
TL;DR: We introduce Reputation and Trust mechanisms to defend Federated Learning against data poisoning attacks, in which malicious clients degrade global model performance by manipulating their local data.
Abstract: In this paper, we address the vulnerability of Federated Learning (FL) to data poisoning attacks, where malicious clients can degrade global model performance by manipulating their local data. Existing defenses often suffer from high computational complexity and unrealistic assumptions about the attacker's knowledge. To overcome these challenges, we develop a novel FL defense mechanism based on Reputation and Trust metrics. This approach dynamically identifies and excludes malicious clients by detecting statistical anomalies in their model updates and computing historical reputation metrics. Evaluated on the BloodMNIST dataset under data poisoning attacks, our method demonstrates superior performance compared to Multi-Krum, detecting and removing malicious clients with fewer errors. This yields improved model accuracy and robustness without prior knowledge of the number of attackers. Our key contribution is a practical and effective defense strategy that enhances the security and robustness of FL systems operating in adversarial environments.
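The abstract does not give the paper's exact Reputation and Trust formulas, so the following is only a minimal illustrative sketch of the general idea: score each client's update by its distance to a robust reference (here, the coordinate-wise median), fold that per-round signal into a historical reputation via an exponential moving average, and aggregate only updates from clients above a trust threshold. The function name `reputation_trust_aggregate` and all parameters (`decay`, `z_thresh`, `trust_min`) are hypothetical choices, not the paper's definitions.

```python
import numpy as np

def reputation_trust_aggregate(updates, reputations,
                               decay=0.9, z_thresh=1.5, trust_min=0.5):
    """Sketch of reputation-and-trust filtering for FL aggregation.

    updates:     (num_clients, dim) array of flattened client model updates
    reputations: (num_clients,) historical reputation scores in [0, 1]

    The anomaly criterion, reputation rule, and thresholds below are
    assumptions for illustration; the paper's actual method may differ.
    """
    # Statistical anomaly score: z-scored distance of each update from
    # the coordinate-wise median update (assumed anomaly criterion).
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    anomalous = z > z_thresh  # clients whose updates deviate strongly

    # Historical metric: exponential moving average of "behaved this
    # round" events (assumed reputation update rule).
    reputations = decay * reputations + (1 - decay) * (~anomalous).astype(float)

    # Trust gate: aggregate only clients that are both non-anomalous this
    # round and above the trust threshold, weighted by reputation.
    trusted = (~anomalous) & (reputations >= trust_min)
    if not trusted.any():
        trusted = ~anomalous  # fall back to non-anomalous clients
    weights = reputations * trusted
    global_update = np.average(updates, axis=0, weights=weights)
    return global_update, reputations

# Toy round: 8 honest clients plus 2 poisoned ones with inflated updates.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.1, size=(10, 5))
updates[8:] += 5.0  # crude stand-in for a data-poisoning effect
reputations = np.ones(10)
agg, reputations = reputation_trust_aggregate(updates, reputations)
print("reputations after one round:", reputations.round(2))
```

In this toy run the two poisoned clients are flagged as anomalous, excluded from the weighted average, and their reputations begin to decay; repeated over rounds, persistent attackers fall below `trust_min` and stay excluded, which matches the dynamic exclusion behavior the abstract describes at a high level.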
Submission Number: 9