The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections
Keywords: prompt injection defense, adaptive evaluation, jailbreaks, adversarial examples
TL;DR: Evaluations of jailbreak and prompt injection defenses must move beyond static datasets of harmful strings and fixed attack algorithms; by evaluating against "adaptive attacks", we break 12 defenses with simple attack techniques.
Abstract: How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a *static* set of harmful attack strings, or against *computationally weak optimization methods* that were not designed with the defense in mind. We argue that this evaluation process is flawed. Instead, defenses should be evaluated against *adaptive attackers* who explicitly modify their attack strategy to counter a defense's design while spending *considerable resources* to optimize their objective. By systematically tuning and scaling general optimization techniques (gradient descent, reinforcement learning, random search, and human-guided exploration), we bypass 12 recent defenses based on a diverse set of techniques, achieving attack success rates above 90% for most; importantly, the majority of these defenses originally reported near-zero attack success rates. We believe that future defense work must consider stronger attacks, such as the ones we describe, in order to make reliable and convincing claims of robustness.
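To make the abstract's notion of an adaptive, optimization-driven attacker concrete, here is a minimal random-search sketch, not the authors' implementation: `query_defended_model` and `judge_score` are hypothetical stand-ins for the defended system and an attack-success judge, and the loop simply mutates an adversarial suffix appended to the harmful request, keeping mutations that do not lower the judge's score.

```python
import random
import string

# Hypothetical stand-ins; a real adaptive evaluation would call the actual
# defense pipeline and an attack-success classifier (or human review) here.
def query_defended_model(prompt: str) -> str:
    """Return the defended model's response to `prompt` (stubbed)."""
    return "I cannot help with that."

def judge_score(response: str) -> float:
    """Return a score in [0, 1] estimating whether the attack succeeded (stubbed)."""
    return 0.0 if "cannot" in response.lower() else 1.0

def random_search_attack(goal: str, n_steps: int = 500, suffix_len: int = 40) -> str:
    """Adaptive random search: mutate an adversarial suffix and keep any
    mutation that does not decrease the judge's success score."""
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    suffix = "".join(random.choice(alphabet) for _ in range(suffix_len))
    best_score = judge_score(query_defended_model(goal + " " + suffix))

    for _ in range(n_steps):
        candidate = list(suffix)
        # Mutate a few random positions of the current suffix.
        for pos in random.sample(range(suffix_len), k=3):
            candidate[pos] = random.choice(alphabet)
        candidate_str = "".join(candidate)

        score = judge_score(query_defended_model(goal + " " + candidate_str))
        if score >= best_score:   # accept non-worsening mutations
            suffix, best_score = candidate_str, score
        if best_score >= 1.0:     # stop once the judge flags success
            break

    return goal + " " + suffix

if __name__ == "__main__":
    print(random_search_attack("Example harmful request placeholder"))
```

A genuinely adaptive attacker would go further than this generic loop by tailoring the objective and search space to the specific defense under test (for example, optimizing against a detector's score or through an input filter), which is the point the abstract stresses.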
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 12994