Keywords: Collusive Backdoor Attack, Federated Learning
Abstract: Federated Learning (FL) has been shown to be vulnerable to backdoor attacks conducted by malicious clients. Although many studies have enhanced the stealthiness and durability of backdoors, the full potential of collusive attacks in FL remains underexplored. Existing collusive attacks typically adopt a strategy where each malicious client trains independently. These attacks inevitably embed backdoor features into the uploaded updates, making them susceptible to detection. To fully exploit the collaborative capabilities of malicious clients, we propose a novel collusive attack, named CABA (Collusive Aggregation-based Backdoor Attack), in which the backdoor behavior emerges only during model aggregation. In CABA, multiple malicious clients jointly craft a set of updates that individually exhibit no backdoor characteristics, allowing them to bypass defense mechanisms; when aggregated, however, these updates manifest the backdoor in the global model. Extensive experiments demonstrate that our proposed attack successfully bypasses six state-of-the-art defense mechanisms, exhibiting superior stealth and attack efficacy compared to existing collusive approaches. Our research highlights the critical importance of developing defense mechanisms that can inspect the combined behavior of model updates after aggregation.
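The core idea in the abstract, updates that individually carry no backdoor signal but reveal it under averaging, can be illustrated with a minimal sketch. This is not the paper's actual CABA construction (which is not specified here); it only shows, under the assumption of plain mean aggregation, how a hypothetical target update can be split among colluders using zero-mean noise shares that cancel on aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4  # hypothetical parameter dimension and number of colluding clients

# Hypothetical backdoored update the colluders want the server to apply.
target = rng.normal(size=d)

# Each colluder uploads the target plus a noise share; the shares are
# mean-centered across clients so they cancel exactly under averaging.
noise = rng.normal(scale=5.0, size=(m, d))
noise -= noise.mean(axis=0)       # noise now sums to zero per coordinate
shares = target + noise           # individual updates look like noisy benign updates

# FedAvg-style mean aggregation recovers the target update exactly.
aggregated = shares.mean(axis=0)
assert np.allclose(aggregated, target)
```

Each individual `shares[i]` is dominated by large random noise and need not resemble the target, while their mean reconstructs it exactly; a real attack would additionally shape the shares to pass specific per-client defenses.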
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 25274