MESAS: Poisoning Defense for Federated Learning Resilient against Adaptive Attackers

Published: 25 Nov 2023, Last Modified: 06 Mar 2025
2023 ACM SIGSAC Conference on Computer and Communications Security
License: CC BY-NC 4.0
Abstract: Federated Learning (FL) enhances decentralized machine learning by safeguarding data privacy, reducing communication costs, and improving model performance with diverse data sources. However, FL faces vulnerabilities such as untargeted poisoning attacks and targeted backdoor attacks, posing challenges to model integrity and security. Preventing backdoors proves especially challenging due to their stealthy nature. Existing mitigation techniques have shown efficacy but often overlook realistic adversaries and diverse data distributions. This work introduces the concept of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously. Extensive empirical testing reveals that existing defenses are vulnerable under this adversary model. We present Metric-Cascades (MESAS), a novel defense method tailored to more realistic scenarios and adversary models. MESAS employs multiple detection metrics simultaneously to combat poisoned model updates, posing a complex multi-objective problem for adaptive attackers. In a comprehensive evaluation across nine backdoors and three datasets, MESAS outperforms existing defenses in distinguishing backdoors from data distribution-related distortions within and across clients. MESAS offers a robust defense against strong adaptive adversaries in real-world data settings, with a modest average overhead of just 24.37 seconds.
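To make the cascade idea concrete, below is a minimal sketch of filtering client updates through several detection metrics in sequence, so that a poisoned update must appear benign under every metric at once. The specific metrics (L2 distance, cosine similarity, and changed-parameter count), the robust z-score outlier test, and the threshold are illustrative assumptions, not the metric set or statistical tests defined in the MESAS paper.

```python
import numpy as np

def flatten(params):
    """Concatenate all parameter tensors of a model into one vector."""
    return np.concatenate([p.ravel() for p in params])

# Hypothetical metrics comparing a client update to the current global model;
# MESAS's actual metric set is specified in the paper.
def l2_to_global(update, global_model):
    return float(np.linalg.norm(flatten(update) - flatten(global_model)))

def cosine_to_global(update, global_model):
    u, g = flatten(update), flatten(global_model)
    return float(u @ g / (np.linalg.norm(u) * np.linalg.norm(g) + 1e-12))

def count_changed(update, global_model, eps=1e-6):
    return int(np.sum(np.abs(flatten(update) - flatten(global_model)) > eps))

METRICS = [l2_to_global, cosine_to_global, count_changed]

def cascade_filter(updates, global_model, z_thresh=2.5):
    """Apply each metric in turn, pruning outliers after every stage.

    An adaptive attacker must then satisfy all metrics simultaneously,
    turning evasion into a multi-objective problem.
    """
    survivors = list(range(len(updates)))
    for metric in METRICS:
        if not survivors:
            break
        vals = np.array([metric(updates[i], global_model) for i in survivors])
        med = np.median(vals)
        mad = np.median(np.abs(vals - med)) + 1e-12
        # Modified z-score based on the median absolute deviation
        # (an illustrative outlier test, not the paper's own).
        z = 0.6745 * np.abs(vals - med) / mad
        survivors = [i for i, zi in zip(survivors, z) if zi <= z_thresh]
    return survivors  # indices of updates kept for aggregation
```

In a round of federated aggregation, the server would call `cascade_filter(updates, global_model)` before averaging, aggregating only the surviving updates.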