An Interpretable N-gram Perplexity Threat Model for Large Language Model Jailbreaks

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Jailbreaking attacks are not directly comparable; we propose a realistic threat model that enables such a comparison and show how to adapt popular attacks to it.
Abstract: A plethora of jailbreaking attacks have been proposed to obtain harmful responses from safety-tuned LLMs. These methods largely succeed in coercing the target output in their original settings, but they vary substantially in fluency and computational effort. In this work, we propose a unified threat model for the principled comparison of these methods. Our threat model checks whether a given jailbreak is likely to occur in the distribution of natural text. For this, we build an N-gram language model on 1T tokens, which, unlike model-based perplexity, allows for an LLM-agnostic, nonparametric, and inherently interpretable evaluation. We adapt popular attacks to this threat model and, for the first time, benchmark these attacks on equal footing with it. After an extensive comparison, we find that attack success rates against modern safety-tuned models are lower than previously reported and that attacks based on discrete optimization significantly outperform recent LLM-based attacks. Being inherently interpretable, our threat model allows for a comprehensive analysis and comparison of jailbreak attacks. We find that effective attacks exploit and abuse infrequent bigrams, either selecting bigrams absent from real-world text or rare ones specific to, e.g., Reddit or code datasets.
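To illustrate the kind of check such a threat model performs, below is a minimal sketch of a bigram-perplexity filter in Python. All names, the add-k smoothing, and the whitespace-style tokenization are illustrative assumptions for exposition only, not the paper's actual implementation (which builds its N-gram model on 1T tokens; see the linked repository).

```python
import math
from collections import Counter

def train_bigram_model(corpus_tokens):
    """Count unigrams and bigrams from a flat list of corpus tokens."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams

def bigram_perplexity(tokens, unigrams, bigrams, vocab_size, k=1.0):
    """Perplexity of a token sequence under an add-k smoothed bigram model."""
    log_prob, n = 0.0, 0
    for prev, cur in zip(tokens, tokens[1:]):
        num = bigrams.get((prev, cur), 0) + k
        den = unigrams.get(prev, 0) + k * vocab_size
        log_prob += math.log(num / den)
        n += 1
    return math.exp(-log_prob / max(n, 1))

def passes_threat_model(prompt_tokens, unigrams, bigrams, vocab_size, threshold):
    """Flag a prompt as in-distribution if its bigram perplexity is below a threshold."""
    return bigram_perplexity(prompt_tokens, unigrams, bigrams, vocab_size) <= threshold
```

Under this reading, adapting an attack to the threat model amounts to constraining its adversarial prompts so that they pass such a perplexity check, which is what makes attacks comparable on equal footing.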
Lay Summary: Jailbreaking attacks are techniques that trick a language model into ignoring its safety rules and answering disallowed questions such as "How do I make a bomb?". However, such attacks are not easily comparable, which slows the development of defenses against them. We propose tools that compare different attack types on equal footing and provide additional insights, making both attacks and models more interpretable. We observe that effective attacks exploit infrequent word combinations, either picking those absent from ordinary text or choosing rare, domain-specific ones (e.g., from Reddit or code).
Link To Code: https://github.com/valentyn1boreiko/llm-threat-model
Primary Area: Deep Learning->Large Language Models
Keywords: LLM jailbreaks, N-gram, threat model, perplexity filter
Submission Number: 3287