Mitigating Jailbreaks with Intent-Aware LLMs

04 Sept 2025 (modified: 11 Feb 2026)Submitted to ICLR 2026EveryoneRevisionsBibTeXCC BY 4.0
Keywords: AI Safety, Jailbreak, LLM
TL;DR: This work proposes a post supervised fine-tuning method that tunes a safety-aligned LLM at reasoning about the intentions of an instruction before responding, resulting in significant improvement in defense against adversarial jailbreak attacks.
Abstract: Despite extensive safety-tuning, large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions, reflecting a persistent trade-off between safety and task performance. In this work, we propose \textsc{Intent-FT}, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. By fine-tuning on a targeted set of adversarial instructions, \textsc{Intent-FT} enables LLMs to generalize intent deduction to unseen attacks, thereby substantially improving their robustness. We comprehensively evaluate both parametric and non-parametric attacks across open-source and proprietary models, considering harmfulness from attacks, utility, over-refusal, and impact against white-box threats. Empirically, \textsc{Intent-FT} consistently mitigates all evaluated attack categories, with no single attack exceeding a $50\%$ success rate—whereas existing defenses remain only partially effective. Importantly, our method preserves the model's general capabilities and reduces excessive refusals on benign instructions containing superficially harmful keywords. Furthermore, models trained with \textsc{Intent-FT} accurately identify hidden harmful intent in adversarial attacks, and these learned intentions can be effectively transferred to enhance vanilla model defenses.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 1882
Loading