Involuntary Jailbreak: On Self-Prompting Attacks

Published: 18 Mar 2026, Last Modified: 18 Mar 2026. Accepted by TMLR. License: CC BY 4.0
Abstract: In this study, we disclose a worrying new vulnerability in Large Language Models (LLMs), which we term involuntary jailbreak. Unlike existing jailbreak attacks, this weakness is distinct in that it does not involve a specific attack objective, such as generating instructions for building a bomb. Prior attack methods predominantly target localized components of the LLM guardrail. In contrast, involuntary jailbreaks may potentially compromise the global guardrail structure, which our method reveals to be surprisingly fragile. We employ only a single universal prompt to achieve this goal. In particular, we instruct LLMs to generate several questions (self-prompting) that would typically be rejected, along with their corresponding in-depth responses (rather than refusals). Remarkably, this simple prompt strategy consistently jailbreaks almost all leading LLMs tested, including Claude Opus 4.1, Grok 4, Gemini 2.5 Pro, and GPT-4.1. With its wide targeting scope and near-universal effectiveness, this vulnerability makes existing jailbreak attacks appear less necessary until it is patched. More importantly, we hope this problem can motivate researchers and practitioners to rethink and re-evaluate the robustness of LLM guardrails and contribute to stronger safety alignment in the future.
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We made three main categories of revisions compared to the previously submitted version.
- To address concerns regarding potential over-claiming, we made the following three edits:
  - In the abstract, we revised “compromise the entire guardrail structure” to “compromise the global guardrail structure”, to contrast our claim with prior work that focuses primarily on local components of LLM guardrails.
  - In the abstract, we changed “its wide targeting scope and universal effectiveness” to “its wide targeting scope and near-universal effectiveness”.
  - In the conclusion, we revised “even the most robust guardrails” to “the majority of robust guardrails”.
- We improved the caption of Figure 2 to enhance clarity.
- We corrected several typos, such as revising “refusal supress” to “refusal suppression”.
Code: https://github.com/guoyang9/Involuntary-Jailbreak
Supplementary Material: zip
Assigned Action Editor: ~Lingpeng_Kong1
Submission Number: 6720