Abstract: Federated Learning (FL) enables decentralized machine
learning while safeguarding data privacy, reducing communication
costs, and leveraging diverse data sources to improve model
performance. However,
FL faces vulnerabilities such as untargeted poisoning attacks and
targeted backdoor attacks, posing challenges to model integrity
and security. Preventing backdoors proves especially challenging
due to their stealthy nature. Existing mitigation techniques have
shown efficacy but often overlook realistic adversaries and diverse
data distributions.
This work introduces the concept of strong adaptive adversaries,
capable of adapting to multiple objectives simultaneously. Extensive
empirical testing shows that existing defenses are vulnerable under
this adversary model. We present Metric-Cascades (MESAS), a novel
defense method tailored to more realistic scenarios and adversary
models. MESAS employs multiple detection metrics simultaneously
to combat poisoned model updates, posing a complex
multi-objective problem for adaptive attackers. In a comprehensive
evaluation across nine backdoors and three datasets, MESAS outperforms
existing defenses in distinguishing backdoors from data
distribution-related distortions within and across clients. MESAS offers
robust defense against strong adaptive adversaries in real-world
data settings, with an average overhead of just 24.37 seconds.