RoPe-Door: Toward Robust and Persistent Backdoor Data Poisoning Attacks in Federated Learning

Published: 01 Jan 2025, Last Modified: 11 Nov 2025 · IEEE Netw. 2025 · CC BY-SA 4.0
Abstract: Federated Learning (FL) enables privacy-preserving collaborative training by having clients exchange only intermediate data, such as model parameters or gradient updates, rather than raw training data. Nevertheless, FL remains vulnerable to a variety of attacks during critical processes such as local model training and parameter transmission, among which backdoor attacks are particularly prominent. In this paper, we propose RoPe-Door, a novel backdoor data poisoning attack that uses a trigger generation algorithm to improve the robustness and persistence of attacks, even under Byzantine-robust aggregation methods. We conduct extensive experiments on four image classification tasks to evaluate the effectiveness of RoPe-Door. The experimental results demonstrate that, compared to backdoor attacks using random triggers, RoPe-Door exhibits significant advantages in robustness, persistence, and attack effectiveness under both IID and Non-IID data settings.
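To make the attack setting concrete, the following is a minimal sketch of generic backdoor data poisoning on a client's local dataset: a fixed trigger patch is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. This illustrates only the baseline "random/fixed trigger" setup the abstract compares against; the function name, parameters, and trigger placement are illustrative assumptions, and RoPe-Door's actual trigger generation algorithm is not shown here.

```python
import numpy as np

def poison_batch(images, labels, target_label, trigger, ratio=0.1, seed=0):
    """Generic backdoor data poisoning sketch (NOT RoPe-Door's algorithm).

    Stamps a fixed trigger patch onto a random fraction of images and
    relabels them with the attacker's target class, as in standard
    trigger-based backdoor attacks on a federated client's local data.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * ratio)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    th, tw = trigger.shape[:2]
    # Place the trigger patch in the bottom-right corner of each chosen image.
    images[idx, -th:, -tw:] = trigger
    # Flip the labels of poisoned samples to the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx
```

A malicious client would train its local model on the poisoned set before submitting its update; RoPe-Door's contribution, per the abstract, is optimizing the trigger itself so the implanted backdoor survives Byzantine-robust aggregation and persists over training rounds.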