Abstract: Federated learning (FL) allows multiple clients to collaboratively train an efficient deep-learning model, but it is susceptible to backdoor attacks. Traditional detection-based defenses depend on specific metrics to distinguish client gradients. Defense-aware attackers exploit this by constraining their attack gradients on those metrics to evade detection, yielding metric-constrained attacks. This paper concretely instantiates such threats and introduces cosine-constrained attacks, which successfully compromise advanced defenses based on cosine distance. To address this challenge, we propose Scope, a novel defense that detects cosine-constrained attacks using cosine distance itself by exposing the constrained backdoor dimensions of attack gradients. Scope employs dimension-wise normalization and differential scaling to amplify the distinction between backdoor dimensions and benign or unused ones, countering sophisticated attackers' attempts to obscure them. Moreover, we develop a novel clustering approach, namely Dominant Gradient Clustering (DGC), to isolate and eliminate backdoor gradients. Extensive experiments across various datasets, models, FL settings, and adversary scenarios demonstrate that Scope consistently outperforms existing defenses by a significant margin, especially against cosine-constrained attacks. Additionally, we present a Scope-tailored attack designed to evade Scope; it remains ineffective even when its stealthiness is maximized, further underscoring Scope's robustness. We release our source code at: https://github.com/siquanhuang/Scope.
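The abstract names two mechanisms, dimension-wise normalization and differential scaling, followed by clustering over cosine distances. The snippet below is a minimal, hypothetical sketch of how such a pipeline could look; the function name `scope_like_distances`, the max-abs per-dimension normalization, the exponent `gamma`, and the sign-preserving power law are illustrative assumptions, not details taken from the paper (see the linked repository for the authors' actual implementation).

```python
import numpy as np

def scope_like_distances(updates: np.ndarray, gamma: float = 3.0,
                         eps: float = 1e-12) -> np.ndarray:
    """Hypothetical sketch: amplify per-dimension differences before
    computing pairwise cosine distances between client updates.

    updates: (n_clients, n_params) array of flattened client gradients.
    gamma:   assumed differential-scaling exponent (not from the paper).
    """
    # Dimension-wise normalization (assumed form): rescale every
    # coordinate by its maximum absolute value across clients so all
    # dimensions start on a comparable unit scale.
    scale = np.abs(updates).max(axis=0, keepdims=True) + eps
    normalized = updates / scale

    # Differential scaling (assumed form): a sign-preserving power law
    # that magnifies dominant (potentially backdoor) dimensions while
    # shrinking small benign or unused ones.
    scaled = np.sign(normalized) * np.abs(normalized) ** gamma

    # Pairwise cosine distances on the scaled updates; a clustering
    # step (the paper's DGC) would then separate suspect gradients
    # from this matrix.
    norms = np.linalg.norm(scaled, axis=1, keepdims=True) + eps
    unit = scaled / norms
    return 1.0 - unit @ unit.T  # (n_clients, n_clients) distance matrix
```

The intent of the power law in this sketch is that coordinates a cosine-constrained attacker keeps small are suppressed further, while the backdoor dimensions the attack cannot fully hide are amplified, making them visible to a subsequent clustering step.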