Breaking Distributed Backdoor Defenses for Federated Learning in Non-IID Settings

Published: 01 Jan 2022, Last Modified: 15 May 2025 · MSN 2022 · CC BY-SA 4.0
Abstract: Federated learning (FL) is a privacy-preserving distributed machine learning architecture designed to solve the problem of data silos. Although FL is proposed to protect data security, it still faces security challenges. Backdoor attacks are a potential threat in FL: they aim to manipulate the model's performance on chosen backdoor tasks by injecting adversarial triggers. As a more insidious variant, distributed backdoor attacks decompose a single global trigger into multiple local patterns and assign them to different attackers. In this paper, we study the entire training process of the current distributed backdoor attack (DBA) in depth and propose a cooperative DBA method for non-IID FL that breaks through existing defenses. To bypass cosine-similarity detection, we design an update rotation and scaling technique based on two independent training processes that disguises malicious updates among benign ones. We conduct exhaustive experiments to evaluate the performance of the proposed method against state-of-the-art defenses. The results show that it is considerably stealthier than the current DBA method while maintaining high backdoor attack intensity.
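To make the evasion idea concrete, below is a minimal sketch (not the authors' code) of how a malicious client could rotate and rescale its update so that a cosine-similarity defense is less likely to flag it. The names `benign_ref`, `alpha`, and `norm_cap` are illustrative assumptions: `benign_ref` stands in for a benign update direction the attacker can estimate from its own clean training run, `alpha` controls how much of the malicious direction is retained, and `norm_cap` mimics a norm-clipping bound.

```python
# Hypothetical sketch of cosine-similarity evasion via update rotation and scaling.
# Assumes model updates are flattened NumPy vectors.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def disguise_update(malicious: np.ndarray,
                    benign_ref: np.ndarray,
                    alpha: float = 0.3,
                    norm_cap: float = 1.0) -> np.ndarray:
    """Blend the malicious update toward the benign direction, then rescale.

    Keeps a fraction `alpha` of the malicious direction and (1 - alpha) of the
    benign reference (the "rotation"), then clips the norm to `norm_cap` so the
    magnitude also resembles benign updates (the "scaling").
    """
    blended = alpha * malicious + (1.0 - alpha) * benign_ref
    norm = np.linalg.norm(blended)
    if norm > norm_cap:
        blended = blended * (norm_cap / norm)
    return blended

# Toy usage: the disguised update has a much higher cosine similarity to the
# benign reference than the raw malicious update does.
rng = np.random.default_rng(0)
benign_ref = rng.normal(size=1000)
malicious = rng.normal(size=1000)
print(cosine(malicious, benign_ref),
      cosine(disguise_update(malicious, benign_ref), benign_ref))
```

This is only an illustration of the general blend-and-clip idea; the paper's actual technique uses two independent training processes to derive the disguised update.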