Paper Link: https://openreview.net/forum?id=v5MwUDA11oO
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Trojan attacks raise serious security concerns. In this paper, we investigate the underlying mechanism of Trojaned BERT models. We observe the attention-focus drifting behavior of Trojaned models, i.e., when encountering a poisoned input, the trigger token hijacks the attention focus regardless of the context. We provide a thorough qualitative and quantitative analysis of this phenomenon, revealing insights into the Trojan mechanism. Based on this observation, we propose an attention-based Trojan detector to distinguish Trojaned models from clean ones. To the best of our knowledge, we are the first to analyze the Trojan mechanism and develop a Trojan detector based on the transformer's attention.
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC-3
Copyright Consent Signature (type Name Or NA If Not Transferrable): Weimin Lyu
Copyright Consent Name And Address: Stony Brook University, Stony Brook, NY 11794