$DPOT_{L_0}$: Concealing Backdoored model updates in Federated Learning by Data Poisoning with $L_0$-norm-bounded Optimized Triggers
Abstract: Traditional backdoor attacks in Federated Learning (FL) that rely on fixed trigger patterns and model poisoning perform poorly against state-of-the-art defenses because malicious client model updates diverge significantly from benign ones. To effectively conceal malicious model updates among benign ones, we propose $DPOT_{L_0}$, a backdoor attack strategy in FL that dynamically constructs a per-round backdoor objective by optimizing an $L_0$-norm-bounded backdoor trigger, so that backdoor data have minimal effect on model updates and the global model's main-task performance is preserved. We theoretically justify the concealment property of $DPOT_{L_0}$'s model updates in linear models. Our experiments show that $DPOT_{L_0}$, using only data poisoning, effectively undermines state-of-the-art defenses and outperforms existing backdoor attack techniques on various datasets.
Primary Area: Social Aspects->Security
Keywords: Data Poisoning, Backdoor Attack, Federated Learning
Submission Number: 14632
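The abstract's core mechanism is the per-round optimization of a trigger whose perturbation is bounded in $L_0$ norm, i.e., confined to a small number of pixels. The sketch below is a minimal, hypothetical illustration of that idea under the current global model, not the authors' implementation; all names (`optimize_l0_trigger`, `k`, `target_label`, the projection-by-top-k step) are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of optimizing an
# L0-norm-bounded backdoor trigger against the current global model.
import torch
import torch.nn.functional as F

def optimize_l0_trigger(model, images, target_label, k=16, steps=100, lr=0.1):
    """Optimize a shared trigger perturbation on at most k pixels that pushes
    `images` toward `target_label` under `model` (hypothetical helper)."""
    model.eval()
    delta = torch.zeros_like(images[0], requires_grad=True)  # one trigger for all inputs
    opt = torch.optim.Adam([delta], lr=lr)
    targets = torch.full((images.size(0),), target_label,
                         dtype=torch.long, device=images.device)

    for _ in range(steps):
        poisoned = (images + delta).clamp(0, 1)
        loss = F.cross_entropy(model(poisoned), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Enforce the L0 bound: keep only the k pixels with the largest
        # perturbation magnitude (summed over channels), zero out the rest.
        with torch.no_grad():
            mag = delta.abs().sum(dim=0)            # per-pixel magnitude, shape [H, W]
            if mag.numel() > k:
                thresh = mag.flatten().topk(k).values.min()
                mask = (mag >= thresh).float()
                delta.mul_(mask.unsqueeze(0))

    return delta.detach()
```

In this reading, the attacker would poison local data by adding the optimized trigger and relabeling to the target class, then train normally, which is consistent with the abstract's claim that the attack requires only data poisoning rather than direct model poisoning.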