Submission Type: Regular Long Paper
Submission Track: NLP Applications
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Backdoor Attack, BERT, Attention Loss, Natural Language Processing
TL;DR: We propose a Trojan Attention Loss that enhances Trojan behavior by directly manipulating attention patterns.
Abstract: Recent studies have revealed that backdoor attacks can threaten the safety of natural language processing (NLP) models. Investigating backdoor attack strategies helps to understand model vulnerabilities.
Most existing textual backdoor attacks focus on generating stealthy triggers or modifying model weights. In this paper, we directly target the interior structure of neural networks and the backdoor mechanism. We propose a novel Trojan Attention Loss (TAL), which enhances the Trojan behavior by directly manipulating the attention patterns. Our loss can be applied to different attack methods to boost their efficacy in terms of attack success rates and poisoning rates. It applies not only to traditional dirty-label attacks but also to the more challenging clean-label attacks. We validate our method on different backbone models (BERT, RoBERTa, and DistilBERT) and various tasks (sentiment analysis, toxic detection, and topic classification).
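Below is a minimal sketch of what an attention-manipulating loss of this kind could look like, assuming a HuggingFace-style BERT run with output_attentions=True and a hypothetical binary trigger_mask marking trigger-token positions. The function name, weighting, and formulation are illustrative assumptions, not the paper's exact method:

```python
import torch

def trojan_attention_loss(attentions, trigger_mask):
    """Encourage attention heads to concentrate on trigger-token positions.

    attentions: tuple of per-layer tensors of shape [batch, heads, seq, seq]
                (e.g. model(..., output_attentions=True).attentions)
    trigger_mask: [batch, seq] float tensor, 1.0 at trigger-token positions
    """
    loss = torch.tensor(0.0)
    for layer_attn in attentions:
        # Attention mass each query token sends to trigger positions:
        # broadcast mask to [batch, 1, 1, seq], sum over key dimension.
        mass_on_trigger = (layer_attn * trigger_mask[:, None, None, :]).sum(-1)
        # Maximize that mass by minimizing its negative mean.
        loss = loss - mass_on_trigger.mean()
    return loss / len(attentions)
```

In this sketch, the loss would be added to the ordinary task loss on poisoned samples, e.g. `total_loss = ce_loss + lam * trojan_attention_loss(outputs.attentions, trigger_mask)`, where `lam` is a balancing hyperparameter.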
Submission Number: 2009