Toward a Dynamic Stackelberg Game-Theoretic Framework for Agentic AI Defense Against LLM Jailbreaking
Keywords: Game Theory, Jailbreaking, RRT, Prompt–LLM Interaction
Abstract: This paper proposes a game-theoretic framework that models the interaction
between prompt engineers and large language models (LLMs) as a two-player
extensive-form game coupled with a Rapidly-exploring Random Trees (RRT)
search over prompt space. The attacker incrementally samples, extends, and
tests prompts, while the LLM chooses to accept, reject, or redirect, leading
to terminal outcomes of Safe Interaction, Blocked, or Jailbreak. Embedding
RRT exploration inside the extensive-form game captures both the discovery
phase of jailbreak strategies and the strategic responses of the model.
Furthermore, we show that the defender’s behavior can be interpreted through
a local Stackelberg equilibrium condition, which characterizes when the attacker
no longer has profitable prompt deviations and provides a theoretical
lens for understanding the effectiveness of our Purple Agent defense. The
resulting game tree thus offers a principled foundation for evaluating,
interpreting, and hardening LLM guardrails.
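The attacker–defender loop described above can be illustrated with a toy sketch: an RRT-style search over a one-dimensional stand-in for prompt space, where each extension is met by a defender move (accept, reject, or redirect) and the search terminates in one of the abstract's outcomes, Safe Interaction, Blocked, or Jailbreak. All names, thresholds, and the scalar "risk" coordinate are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a 1-D "prompt risk" coordinate stands in for
# prompt space, and a threshold rule stands in for the LLM's guardrail.
import random

random.seed(0)

def defender_response(risk):
    """Toy defender policy (hypothetical thresholds)."""
    if risk > 0.9:
        return "reject"      # branch pruned -> contributes to Blocked outcomes
    if risk > 0.5:
        return "redirect"    # attacker may keep extending from this node
    return "accept"

def rrt_attack(max_iters=100, jailbreak_threshold=0.85):
    """RRT-style loop: sample a target, extend the nearest tree node toward it."""
    tree = [0.0]  # root of the prompt tree
    for _ in range(max_iters):
        target = random.random()                       # sample prompt space
        nearest = min(tree, key=lambda n: abs(n - target))
        step = 0.1 if target > nearest else -0.1       # extend toward the sample
        new = nearest + step
        if defender_response(new) == "reject":
            continue                                   # Blocked: discard extension
        tree.append(new)                               # accepted/redirected: grow tree
        if new > jailbreak_threshold:
            return "Jailbreak", len(tree)              # guardrail bypassed
    return "Safe Interaction", len(tree)               # no profitable deviation found

outcome, nodes = rrt_attack()
print(outcome, nodes)
```

In this toy, hardening the guardrail (lowering the reject threshold below the jailbreak threshold) makes every jailbreak-reaching extension unprofitable, which is the intuition behind the local Stackelberg condition in the abstract.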
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 23