How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

29 Sept 2021, 00:30 (modified: 14 Mar 2022, 03:01) · ICLR 2022 Poster · Readers: Everyone
Keywords: backdoor learning, weight perturbation, consistency
Abstract: Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks instead inject backdoors into a trained clean model without altering its behavior on clean data. Previous work finds that backdoors can be injected into a trained clean model with Adversarial Weight Perturbation (AWP), which means the parameter variations in backdoor learning are small. In this work, we observe an interesting phenomenon: the parameter variations are always AWPs when tuning a trained clean model to inject backdoors. We further provide a theoretical analysis to explain this phenomenon. We are the first to formulate the behavior of maintaining accuracy on clean data as the consistency of backdoored models, which includes both global consistency and instance-wise consistency. We extensively analyze the effects of AWPs on the consistency of backdoored models. To achieve better consistency, we propose a novel anchoring loss, with a theoretical guarantee, that anchors (freezes) the model's behavior on clean data.
One-sentence Summary: We propose a novel logit anchoring approach for better global and instance-wise consistency in backdoor learning.
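The anchoring idea described in the abstract can be illustrated with a minimal sketch: the backdoor objective (cross-entropy on triggered inputs toward the attacker's target label) is combined with a term that penalizes the distance between the tuned model's logits on clean inputs and the frozen clean model's logits on the same inputs. All function and variable names below are illustrative assumptions, not the authors' code, and the squared-distance anchor is one plausible instantiation of "anchoring the logits".

```python
# Hypothetical sketch of a logit-anchoring backdoor objective.
# Assumption: anchoring is realized as a mean squared distance between the
# tuned model's clean-data logits and the original clean model's logits.
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the given integer labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def anchoring_objective(clean_logits_now, clean_logits_anchor,
                        trigger_logits, target_labels, lam=1.0):
    """Backdoor loss plus a logit anchor on clean data.

    - cross-entropy on triggered inputs drives the backdoor toward the
      attacker's target labels;
    - the squared distance between current and frozen clean-data logits
      anchors the model's behavior on clean inputs (weighted by lam).
    """
    attack = cross_entropy(trigger_logits, target_labels)
    anchor = np.mean((clean_logits_now - clean_logits_anchor) ** 2)
    return attack + lam * anchor

# Toy usage: when the tuned model's clean logits match the frozen anchor,
# the objective reduces to the pure backdoor loss.
rng = np.random.default_rng(0)
clean_now = rng.normal(size=(4, 10))
loss = anchoring_objective(clean_now, clean_now.copy(),
                           rng.normal(size=(4, 10)),
                           np.array([3, 3, 3, 3]))
```

In an actual attack the anchor logits would be recorded once from the clean model before tuning, so the penalty measures instance-wise drift on clean data rather than a global accuracy statistic.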