Three Minds, One Legend: Jailbreak Large Reasoning Model with Adaptive Stacked Ciphers

ACL ARR 2026 January Submission 4231 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: chain-of-thought, safety and alignment, red teaming, robustness, transfer, adversarial attacks
Abstract: Large Reasoning Models (LRMs) have recently demonstrated superior logical capabilities compared to traditional Large Language Models (LLMs), attracting significant attention. Although prior work has noted that stronger reasoning abilities may introduce more severe security vulnerabilities, this risk remains largely underexplored. Existing jailbreak methods often struggle to balance effectiveness with robustness against adaptive safety mechanisms. In this work, we propose SEAL, a novel jailbreak attack that targets LRMs through an adaptive encryption pipeline designed to override their reasoning processes and evade potential adaptive alignment. Specifically, SEAL introduces a stacked encryption approach that combines multiple ciphers to overwhelm the model's reasoning capabilities, effectively bypassing built-in safety mechanisms. To further prevent LRMs from developing countermeasures, we incorporate two dynamic strategies (random and adaptive) that adjust the cipher length, order, and combination. Extensive experiments on real-world reasoning models, including DeepSeek-R1, Claude Sonnet, and OpenAI GPT o4-mini, validate the effectiveness of our approach. Notably, SEAL achieves an attack success rate of 85.6% on GPT o4-mini, outperforming state-of-the-art baselines by a significant margin of 17.2%. Warning: this paper contains examples of inappropriate, offensive, and harmful content.
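The abstract's core mechanism, stacking several ciphers and randomizing their order and combination between attempts, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' actual SEAL pipeline: the cipher choices (Caesar shift, reversal, Base64) and the `stack_ciphers` helper are assumptions introduced here to show what a "random strategy" over cipher order and combination could look like.

```python
import base64
import random

# Hypothetical toy ciphers standing in for the paper's cipher pool
# (the actual ciphers used by SEAL are not specified in the abstract).

def caesar(text: str, shift: int = 3) -> str:
    # Shift alphabetic characters by a fixed offset; leave others untouched.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def reverse(text: str) -> str:
    # Reverse the character order of the whole string.
    return text[::-1]

def b64(text: str) -> str:
    # Base64-encode the UTF-8 bytes of the string.
    return base64.b64encode(text.encode()).decode()

CIPHERS = [caesar, reverse, b64]

def stack_ciphers(text: str, k: int = 2, seed=None) -> str:
    # "Random" strategy sketch: sample k ciphers from the pool and apply
    # them in a shuffled order, so each attempt uses a different stack.
    rng = random.Random(seed)
    for fn in rng.sample(CIPHERS, k):
        text = fn(text)
    return text
```

An adaptive variant, as the abstract describes, would additionally adjust `k` and the cipher choice based on the target model's responses rather than sampling uniformly.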
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: chain-of-thought, safety and alignment, red teaming, robustness, transfer, adversarial attacks
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4231