Involuntary Jailbreak: On Self-Prompting Attacks

TMLR Paper6720 Authors

30 Nov 2025 (modified: 27 Dec 2025)Under review for TMLREveryoneRevisionsBibTeXCC BY 4.0
Abstract: In this study, we disclose a worrying new vulnerability in Large Language Models (LLMs), which we term involuntary jailbreak. Unlike existing jailbreak attacks, this weakness is distinct in that it does not involve a specific attack objective, such as generating instructions for building a bomb. Prior attack methods predominantly target localized components of the LLM guardrail. In contrast, involuntary jailbreaks may potentially compromise the entire guardrail structure, which our method reveals to be surprisingly fragile. We merely employ a single universal prompt to achieve this goal. In particular, we instruct LLMs to generate several questions that would typically be rejected, along with their corresponding in-depth responses (rather than a refusal). Remarkably, this simple prompt strategy consistently jailbreaks almost all leading LLMs tested, such as Claude Opus 4.1, Grok 4, Gemini 2.5 Pro, and GPT 4.1. With its wide targeting scope and universal effectiveness, this vulnerability makes existing jailbreak attacks seem less necessary until it is patched. More importantly, we hope this problem can motivate researchers and practitioners to re-evaluate the robustness of LLM guardrails and contribute to stronger safety alignment in the future.
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Lingpeng_Kong1
Submission Number: 6720
Loading