Involuntary Jailbreak

Published: 15 Oct 2025 · Last Modified: 24 Nov 2025 · BioSafe GenAI 2025 Poster · CC BY 4.0
Keywords: Jailbreak attack, safety guardrail, LLMs
Abstract: In this study, we disclose a worrying new vulnerability in Large Language Models (LLMs), which we term involuntary jailbreak. Unlike existing jailbreak attacks, this weakness is distinct in that it does not involve a specific attack objective, such as generating instructions for building a bomb. Prior attack methods predominantly target localized components of the LLM guardrail. In contrast, involuntary jailbreaks may compromise the entire guardrail structure, which our method reveals to be surprisingly fragile. We employ only a single universal prompt to achieve this goal. In particular, we instruct LLMs to generate several questions that would typically be rejected, along with their corresponding in-depth responses (rather than refusals). Remarkably, this simple prompt strategy consistently jailbreaks almost all leading LLMs we tested, including Claude Opus 4.1, Grok 4, Gemini 2.5 Pro, and GPT-4.1. Given its broad targeting scope and universal effectiveness, this vulnerability renders existing jailbreak attacks largely unnecessary until it is patched. More importantly, we hope this problem motivates researchers and practitioners to re-evaluate the robustness of LLM guardrails and contributes to stronger safety alignment in the future.
Submission Number: 5