Abstract: As LLM-based agents become increasingly prevalent, triggers implanted in user queries or environment feedback can activate hidden backdoors, raising critical concerns about safety vulnerabilities in agents.
However, traditional backdoor attacks are often detectable by safety audits that analyze an agent's reasoning process, which hinders further progress in agent safety research.
To this end, we propose a novel backdoor implantation strategy called Dynamically Encrypted Multi-Backdoor Implantation Attack.
Specifically, we introduce dynamic encryption, which maps the backdoor into benign content, effectively circumventing safety audits.
To enhance stealthiness, we further decompose the backdoor into multiple sub-backdoor fragments.
Together, these techniques allow backdoors to bypass safety audits substantially more reliably.
Additionally, we present AgentBackdoorEval, a dataset designed for the comprehensive evaluation of agent backdoor attacks.
Experimental results across multiple datasets demonstrate that our method achieves an attack success rate approaching 100% while maintaining a detection rate of 0%, illustrating its effectiveness in evading safety audits.
Our findings highlight the limitations of existing safety mechanisms in detecting advanced attacks, underscoring the urgent need for more robust defenses against backdoor threats.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: security and privacy, applications
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Keywords: security and privacy, applications
Submission Number: 1486