Stand on the Shoulders of Giants: Building JailExpert from Previous Attack Experience

ACL ARR 2025 February Submission4099 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models (LLMs) generate human-aligned content under certain safety constraints. However, the well-known technique of ``jailbreak prompts'' can circumvent safety-alignment measures and induce LLMs to output malicious content. Research on jailbreaking helps identify vulnerabilities in LLMs and guides the development of robust security frameworks. To address the issue of attack templates becoming obsolete as models evolve, existing methods adopt iterative mutation and dynamic optimization to enable more automated jailbreak attacks. However, these methods face two challenges, inefficiency and repetitive optimization, because they overlook the value of past attack experiences. To better integrate past attack experiences into current jailbreak attempts, we propose $\textbf{JailExpert}$, an automated jailbreak framework that is the first to formally represent the structure of attack experience, group experiences by semantic drift, and support dynamic updating of the experience pool. Extensive experiments demonstrate that JailExpert significantly improves both attack effectiveness and efficiency. Compared to the current state-of-the-art black-box jailbreak method, JailExpert achieves an average 18\% increase in attack success rate and a 2.5-fold improvement in efficiency.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: security and privacy, adversarial attacks/examples/training
Contribution Types: Model analysis & interpretability, Reproduction study, Approaches to low-compute settings / efficiency, Data analysis
Languages Studied: English
Submission Number: 4099