GN: Guided Noise Eliminating Backdoors in Federated Learning

Published: 2024 · Last Modified: 05 Nov 2025 · SMC 2024 · CC BY-SA 4.0
Abstract: Federated learning (FL) trains a model collaboratively but, owing to its privacy-preserving nature, is susceptible to backdoor attacks. Existing defenses against backdoor attacks in FL often rely on specific assumptions about the data distribution among clients and are ineffective against sophisticated attacks. Although adding noise mitigates backdoors injected into the model, it also degrades performance on the main task. To address these issues, we propose a novel defense mechanism, Guided Noise (GN), that eliminates backdoors without compromising the model's main-task performance. GN uses conductance to evaluate the importance of neurons and then adds guided noise to suspected backdoor neurons selected by voting, disturbing only the backdoor task. Extensive experimental evaluations show that GN significantly outperforms traditional noise-based defenses, making it a valuable drop-in replacement for the noising step in existing defenses and enhancing their robustness against backdoor attacks in FL.
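To make the mechanism concrete, below is a minimal server-side sketch of the guided-noise idea in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the function name `guided_noise_aggregate`, the majority-vote threshold, and the treatment of each parameter coordinate as a "neuron" are all hypothetical simplifications, and the per-client importance scores are assumed to come from a separate conductance computation.

```python
import torch

def guided_noise_aggregate(client_updates, importance_scores,
                           vote_frac=0.1, sigma=0.05):
    """Hypothetical sketch of the guided-noise defense from the abstract.

    client_updates:    list of flat update tensors, one per client.
    importance_scores: list of per-coordinate importance tensors
                       (e.g., conductance scores), one per client.
    """
    n = importance_scores[0].numel()
    k = max(1, int(vote_frac * n))

    # Each client "votes" for its top-k most important coordinates.
    votes = torch.zeros(n)
    for score in importance_scores:
        votes[torch.topk(score, k).indices] += 1

    # Coordinates flagged by a majority of clients are suspected
    # backdoor neurons (majority threshold is an assumption here).
    suspects = votes > (len(importance_scores) / 2)

    # Plain FedAvg over the client updates...
    aggregated = torch.stack(client_updates).mean(dim=0)

    # ...then add Gaussian noise only at the suspected coordinates,
    # leaving the main task's parameters untouched.
    noise = sigma * torch.randn(n)
    aggregated[suspects] += noise[suspects]
    return aggregated
```

The key design point the abstract emphasizes is visible in the last step: unlike uniform noising, the perturbation is restricted to the voted-on coordinates, so the main task is largely unaffected while the backdoor task is disturbed.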