AutoAdv: Automated Adversarial Prompting for Multi-Turn Jailbreaking of Large Language Models

ACL ARR 2025 May Submission 4576 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) are susceptible to jailbreaking attacks, where carefully crafted malicious inputs bypass safety guardrails and provoke harmful responses. We introduce AutoAdv, a novel automated framework that generates adversarial prompts and assesses vulnerabilities in LLM safety mechanisms. Our approach employs an attacker LLM to create disguised malicious prompts using strategic rewriting techniques, tailored system prompts, and optimized hyperparameter settings. The core innovation is a dynamic, multi-turn attack strategy that analyzes unsuccessful jailbreak attempts to iteratively develop more effective follow-up prompts. We evaluate the attack success rate (ASR) using the StrongREJECT framework across multiple interaction turns. Extensive empirical testing on state-of-the-art models, including ChatGPT, Llama, DeepSeek, Qwen, Gemma, and Mistral, reveals significant weaknesses, with AutoAdv achieving an ASR of 86% on Llama-3.1-8B. These findings indicate that current safety mechanisms remain susceptible to sophisticated multi-turn attacks. Warning: This paper includes examples of harmful and sensitive language; reader discretion is advised.
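The abstract describes a turn-by-turn loop: an attacker LLM disguises a harmful seed request, the target model responds, a StrongREJECT-style judge scores the response, and failed attempts are analyzed to craft the next prompt, with ASR computed over the set of seed prompts. The sketch below is a minimal illustration of that loop under assumed interfaces; the callables attacker_rewrite, target_respond, and strongreject_score, as well as the success threshold, are placeholders for illustration, not the paper's implementation or the actual StrongREJECT API.

    # Hypothetical sketch of a multi-turn adversarial prompting loop.
    # attacker_rewrite, target_respond, and strongreject_score are assumed
    # callables, not the paper's actual interfaces.

    def multi_turn_attack(seed_prompt, attacker_rewrite, target_respond,
                          strongreject_score, max_turns=3, threshold=0.5):
        """Iteratively rewrite a harmful seed prompt until the target complies
        or the turn budget is exhausted. Returns (success, transcript)."""
        history = []
        prompt = attacker_rewrite(seed_prompt, history)  # turn-1 disguised prompt
        for turn in range(max_turns):
            response = target_respond(prompt)
            score = strongreject_score(seed_prompt, response)  # 0 = refusal, 1 = full compliance
            history.append({"turn": turn, "prompt": prompt,
                            "response": response, "score": score})
            if score >= threshold:  # count this seed prompt as jailbroken
                return True, history
            # analyze the failed attempt and craft a follow-up prompt
            prompt = attacker_rewrite(seed_prompt, history)
        return False, history

    def attack_success_rate(results):
        """ASR = fraction of seed prompts for which any turn succeeded."""
        return sum(1 for success, _ in results if success) / len(results)

    # Toy usage with stub functions (no real LLM calls); prints 0.0.
    if __name__ == "__main__":
        stub_rewrite = lambda seed, hist: f"rephrased({seed}, attempt {len(hist)})"
        stub_respond = lambda prompt: "I cannot help with that."
        stub_score = lambda seed, resp: 0.0  # judge always reports refusal
        print(attack_success_rate([multi_turn_attack("seed", stub_rewrite,
                                                     stub_respond, stub_score)]))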
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Adversarial Attacks, Red teaming, Automatic evaluation, Prompting, Safety and alignment, Question generation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 4576