Silencer: Pruning-aware Backdoor Defense for Decentralized Federated Learning

17 Sept 2023 (modified: 25 Mar 2024) | ICLR 2024 Conference Withdrawn Submission
Keywords: Peer-to-peer, decentralized federated learning, pruning-aware training, Fisher-guided pruning
TL;DR: We propose Silencer, a two-stage pruning-aware defense for decentralized federated learning.
Abstract: Decentralized Federated Learning (DFL) with gossip protocols faces a much larger attack surface under backdoor attacks: because aggregation proceeds without central coordination, a small percentage of adversaries in DFL can directly gossip poisoned model updates to their neighbors and subsequently spread the poisoning effect across the entire peer-to-peer (P2P) network. By examining backdoor attacks in DFL, we uncover a notable phenomenon: the poisoned parameters on adversaries exhibit distinct patterns in the diagonal of their empirical Fisher information (FI). We then show that these invariant FI patterns can be exploited to cure the poisoned models through effective model pruning. Unfortunately, we also observe that naive FI-based pruning incurs a non-negligible loss of benign accuracy. To attenuate this negative impact, we present {\sc Silencer}, a \textit{dynamic two-stage model pruning scheme} with robustness and accuracy as dual goals. In the first stage, {\sc Silencer} employs an FI-based parameter pruning/reclamation process during per-client local training: each client trains a sparse surrogate model so that it is aware of, and can mitigate, the impact of the pruning performed in the second stage. In the second stage, {\sc Silencer} performs consensus filtering to remove dummy/poisoned parameters from the global model and recovers a benign sparse core model for deployment. Extensive experiments, conducted with three representative DFL settings, demonstrate that {\sc Silencer} \textit{consistently} outperforms existing defenses by a large margin. Our code is available at \url{https://anonymous.4open.science/r/Silencer-8F08/}.
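To make the two-stage idea in the abstract concrete, below is a minimal PyTorch sketch. It is not the authors' implementation: the function names, the keep ratio, the "keep high-FI parameters on local benign data" rule, and the consensus threshold `theta` are illustrative assumptions, since the abstract does not fix these details.

```python
# Illustrative sketch of FI-based pruning + consensus filtering.
# NOT the authors' code; keep rule, keep_ratio, and theta are assumptions.
import torch
import torch.nn.functional as F


def diagonal_fisher(model, loader, device="cpu"):
    """Diagonal of the empirical Fisher information: per-parameter
    average of squared loss gradients over a client's local data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.to(device).eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}


def fi_mask(fisher, keep_ratio=0.5):
    """Stage 1 (per client): keep the `keep_ratio` fraction of parameters
    with the highest FI on local data (assumed keep rule); the rest are
    pruned. The kept set defines the client's sparse surrogate model."""
    scores = torch.cat([f.flatten() for f in fisher.values()])
    k = int((1.0 - keep_ratio) * scores.numel())  # number pruned
    threshold = torch.kthvalue(scores, max(k, 1)).values
    return {n: (f > threshold).float() for n, f in fisher.items()}


def consensus_filter(masks, theta=0.5):
    """Stage 2: keep a parameter only if at least a `theta` fraction of
    clients retained it, filtering out dummy/poisoned parameters that
    only a small (potentially adversarial) minority kept."""
    consensus = {}
    for n in masks[0]:
        votes = torch.stack([m[n] for m in masks]).mean(dim=0)
        consensus[n] = (votes >= theta).float()
    return consensus
```

Per the abstract, in a full DFL round each client would train its sparse surrogate under its own mask and gossip updates as usual; the consensus mask is applied only at the end, to recover the benign sparse core model for deployment.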
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1008