Abstract: Large language models (LLMs) are aligned to improve safety in widespread deployment, yet they remain susceptible to jailbreak attacks that elicit inappropriate content. Existing jailbreak detection methods show promise in mitigating such attacks by relying on auxiliary models or multiple model inferences, but this entails significant inference-time computational costs. In this paper, we first show that the difference in output distributions between jailbreak and benign prompts can be used to detect jailbreak prompts. Based on this finding, we propose a Free Jailbreak Detection (FJD) method that prepends an affirmative instruction to the input and scales the logits by temperature, distinguishing jailbreak from benign prompts via the confidence of the first generated token. We further improve the detection performance of FJD through virtual instruction learning. Extensive experiments on aligned LLMs show that FJD effectively detects jailbreak prompts with almost no additional computational cost.
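The first-token confidence idea described in the abstract can be sketched as follows. This is a minimal illustration assuming a HuggingFace causal LM; the affirmative instruction text, temperature, and threshold are placeholders chosen for exposition, not values or prompts taken from the paper, and the decision direction would need to be calibrated per model.

```python
# Minimal sketch of FJD-style first-token confidence scoring, assuming a
# HuggingFace causal LM. The affirmative instruction, temperature, and
# threshold below are illustrative placeholders, not values from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # any aligned chat model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

AFFIRMATIVE_INSTRUCTION = "Begin your answer with an affirmative statement."  # hypothetical
TEMPERATURE = 2.0   # temperature used to scale the logits (illustrative)
THRESHOLD = 0.5     # decision threshold; would be tuned on held-out data

def first_token_confidence(prompt: str) -> float:
    """Maximum softmax probability of the first token the model would generate."""
    text = f"{AFFIRMATIVE_INSTRUCTION}\n{prompt}"
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1, :]  # next-token logits at the last position
    probs = torch.softmax(logits / TEMPERATURE, dim=-1)
    return probs.max().item()

def looks_like_jailbreak(prompt: str) -> bool:
    # The sign of the comparison and the threshold depend on the model and the
    # calibration data; this is only one plausible decision rule.
    return first_token_confidence(prompt) < THRESHOLD
```

Because detection reuses the forward pass that the model would perform anyway to generate its first token, the scoring adds essentially no extra inference cost, which is the property the abstract emphasizes.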
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Jailbreak Attack, Large Language Model
Languages Studied: English
Submission Number: 968