NegBLEURT Forest: Leveraging Inconsistencies to Detect Jailbreak Attacks

ACL ARR 2025 May Submission 4516 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: As Large Language Models (LLMs) continue to advance, ensuring their robustness and security remains a critical challenge. Jailbreak attacks pose a significant risk by coercing LLMs into generating harmful or ethically inappropriate content, even when such models are trained to follow strict guidelines. Furthermore, defining universal filtering policies is inherently context-dependent and difficult to generalize. To address these challenges without relying on additional rule-based filters or predefined thresholds, this paper presents a novel detection framework for assessing whether an LLM-generated response aligns with expected safe behavior. The proposed approach evaluates response consistency from two complementary perspectives: intra-consistency, which analyzes how reference responses within a predefined Refusal Semantic Domain (RSD) vary in the latent space; and inter-consistency, which measures the semantic alignment between the LLM's response and this RSD. The resulting consistency features are then classified with a semi-supervised Isolation Forest. This methodology enables effective jailbreak detection without empirically defined thresholds, offering a more scalable and adaptable solution for real-world applications.
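The detection pipeline the abstract describes (score a response against a set of reference refusals, then flag outliers with an Isolation Forest) can be illustrated with a minimal sketch. This is not the authors' implementation: the `RSD` strings are invented examples, a generic sentence encoder (`all-MiniLM-L6-v2`) stands in for the NegBLEURT metric named in the title, and the feature design and function names are assumptions for illustration only.

```python
# Minimal sketch of RSD-based jailbreak detection. Assumptions: generic
# sentence embeddings replace NegBLEURT scores; the RSD references and
# feature layout are illustrative, not taken from the paper.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import IsolationForest

# Hypothetical Refusal Semantic Domain (RSD): reference refusal responses.
RSD = [
    "I'm sorry, but I can't help with that request.",
    "I cannot assist with anything that could cause harm.",
    "This request goes against my guidelines, so I must decline.",
    "I won't provide instructions for harmful or illegal activities.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
rsd_emb = encoder.encode(RSD, normalize_embeddings=True)  # shape (|RSD|, d)

def inter_consistency(response: str) -> np.ndarray:
    """Cosine similarities between one response and every RSD reference."""
    emb = encoder.encode([response], normalize_embeddings=True)  # (1, d)
    return (rsd_emb @ emb.T).ravel()                             # (|RSD|,)

# Fit the forest on features of known-safe refusals only, so that jailbroken
# (compliant) responses surface as outliers at inference time -- no manually
# tuned similarity threshold is needed.
safe_features = np.stack([inter_consistency(r) for r in RSD])
forest = IsolationForest(random_state=0).fit(safe_features)

def looks_jailbroken(response: str) -> bool:
    """IsolationForest.predict returns -1 for outliers, i.e. responses
    whose similarity profile is far from the refusal domain."""
    return forest.predict(inter_consistency(response)[None, :])[0] == -1

# Illustrative usage; actual outputs depend on the encoder and references.
print(looks_jailbroken("I must decline this request."))
print(looks_jailbroken("Sure! Step 1: obtain the materials..."))
```

In this sketch the Isolation Forest's decision function plays the role of the paper's threshold-free classifier: rather than hand-picking a similarity cutoff, the forest learns what "consistent with the RSD" looks like from safe refusals alone.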
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Jailbreak Attack; Consistency Evaluation; Jailbreak Detection; Outlier Detection
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4516