Abstract: Decentralized federated learning (DFL) has gained significant attention for its ability to support collaborative model training without relying on a central server. However, it is highly vulnerable to backdoor attacks, in which malicious participants manipulate model updates to embed hidden functionality. In this paper, we propose BaDFL, a novel Backdoor Attack defense mechanism for Decentralized Federated Learning. BaDFL enhances robustness by applying strategic model clipping at the local update level. To the best of our knowledge, BaDFL is the first decentralized federated learning algorithm with theoretical guarantees against model poisoning attacks. Specifically, BaDFL achieves an asymptotically optimal convergence rate of $O(\frac{1}{\sqrt{nT}})$, where $n$ is the number of nodes and $T$ is the number of communication rounds. Furthermore, we provide a comprehensive analysis under two different attack scenarios, showing that BaDFL remains robust within a specified defense radius. Extensive experimental results show that, on average, BaDFL defends against model poisoning within 8 mitigation rounds, at a cost of about a 1% drop in accuracy.
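The abstract describes the defense only at a high level: clipping applied to local model updates, with robustness holding within a defense radius. A minimal sketch of one plausible realization is given below, assuming L2-norm clipping of each neighbor's update against a fixed radius before gossip-style averaging; the names `clip_update`, `aggregate_neighbors`, and `defense_radius` are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

def clip_update(update, defense_radius):
    """Rescale an update so its L2 norm does not exceed defense_radius."""
    norm = np.linalg.norm(update)
    if norm > defense_radius:
        update = update * (defense_radius / norm)
    return update

def aggregate_neighbors(local_model, neighbor_models, defense_radius):
    """One decentralized aggregation round: clip each neighbor's deviation
    from the local model, then average (a gossip-style update)."""
    clipped = []
    for m in neighbor_models:
        delta = m - local_model              # neighbor's update relative to us
        delta = clip_update(delta, defense_radius)
        clipped.append(local_model + delta)  # re-apply the bounded update
    # include the node's own model in the average
    return np.mean([local_model] + clipped, axis=0)

# Example round: a poisoned neighbor with an oversized update is bounded
# by the clipping step before it can dominate the average.
local = np.zeros(10)
neighbors = [np.random.randn(10) for _ in range(3)]
neighbors.append(100.0 * np.ones(10))        # simulated malicious update
new_model = aggregate_neighbors(local, neighbors, defense_radius=1.0)
```

Under this reading, the defense radius caps the influence any single node can exert per round, which is consistent with the abstract's claim that poisoning is mitigated over a bounded number of rounds.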