Abstract: Backdoor attacks significantly compromise the security of large language models by triggering them to output specific, attacker-controlled content. Existing triggers for textual backdoor attacks fall into two categories: fixed-token triggers and sentence-pattern triggers. However, the former are typically easy to identify and filter out, while the latter, such as syntactic and stylistic triggers, do not apply to all original samples and may cause semantic shifts. In this paper, inspired by the cross-lingual (CL) prompts that LLMs receive in real-world scenarios, we propose a higher-dimensional, paragraph-level trigger method, namely CL-Attack. CL-Attack injects the backdoor using texts with a specific structure that incorporates multiple languages, thereby offering greater stealthiness and universality than existing backdoor attack techniques. Extensive experiments across different tasks and model architectures demonstrate that CL-Attack achieves a nearly 100% attack success rate with a low poisoning rate in both classification and generation tasks. We also empirically show that CL-Attack is more robust against current major defense methods than baseline backdoor attacks. Finally, in response to CL-Attack, we develop a new defense called TranslateDefense, which can partially mitigate its impact.
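For intuition, the following is a minimal, hypothetical sketch of how a paragraph-level cross-lingual trigger might be composed: consecutive segments of one input are rendered in a fixed sequence of languages, so the trigger is the cross-lingual structure of the paragraph rather than any fixed token. The segment splitting, the particular language sequence, and the `translate` helper are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch (assumed, not the paper's implementation): compose a
# paragraph-level cross-lingual trigger. The fixed language sequence acts as
# the structural trigger; no single fixed token is inserted into the text.

from typing import Callable, List

LANGUAGE_SEQUENCE = ["en", "zh", "de"]  # assumed trigger pattern for illustration


def poison_sample(segments: List[str],
                  translate: Callable[[str, str], str]) -> str:
    """Render consecutive segments of one input in a fixed sequence of
    languages, producing a paragraph whose cross-lingual structure serves
    as the backdoor trigger."""
    rendered = []
    for i, segment in enumerate(segments):
        target_lang = LANGUAGE_SEQUENCE[i % len(LANGUAGE_SEQUENCE)]
        rendered.append(translate(segment, target_lang))
    return "\n".join(rendered)


if __name__ == "__main__":
    # Stand-in "translator" that only tags the target language, so the
    # example runs without any external translation service.
    segments = ["First part of the prompt.",
                "Second part of the prompt.",
                "Third part of the prompt."]
    print(poison_sample(segments, lambda text, lang: f"[{lang}] {text}"))
```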