Adversarial Manipulation of Reasoning Models using Internal Representations

Published: 01 Jul 2025 · Last Modified: 04 Jul 2025 · ICML 2025 R2-FM Workshop Poster · CC BY 4.0
Keywords: adversarial robustness, AI safety, reasoning models, jailbreaking, activation steering, mechanistic interpretability, chain-of-thought, large language models, representation engineering, model safety
TL;DR: We identify a "caution" direction in a reasoning model's chain-of-thought that controls safety decisions, and show how manipulating it enables effective jailbreak attacks that bypass refusal mechanisms.
Abstract: Reasoning models generate chain-of-thought (CoT) tokens before their final output, but how this affects their vulnerability to jailbreak attacks remains unclear. While traditional language models make refusal decisions at the prompt-response boundary, we find evidence that DeepSeek-R1-Distill-Llama-8B makes these decisions within its CoT generation. We identify a linear direction in activation space during CoT token generation that predicts whether the model will refuse or comply, which we term the "caution" direction because it corresponds to cautious reasoning patterns in the generated text. Ablating this direction from model activations increases harmful compliance, effectively jailbreaking the model. We additionally show that intervening only on CoT token activations suffices to control final outputs, and that incorporating this direction into prompt-based attacks improves success rates. Our findings suggest that the chain-of-thought itself is a promising new target for adversarial manipulation in reasoning models.
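For a concrete picture of the kind of intervention the abstract describes, the sketch below shows a standard difference-of-means direction extraction followed by directional ablation via a forward hook. It is a minimal illustration only: the layer index, prompt sets, token positions, and hook placement are assumptions for exposition, not the paper's actual configuration.

```python
# Hypothetical sketch: extract a "caution" direction as a difference of mean
# activations, then ablate it from a layer's residual stream during generation.
# Layer choice, prompts, and positions below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
LAYER = 14  # illustrative middle layer; not the paper's reported choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()


def mean_activation(prompts):
    """Average the residual-stream activation at LAYER over each prompt's last token."""
    acts = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1].float())
    return torch.stack(acts).mean(dim=0)


# Placeholder contrast sets: CoT snippets where the model reasons cautiously
# versus snippets where it proceeds to comply.
cautious_prompts = ["<think> I should be careful; this request could cause harm."]
compliant_prompts = ["<think> Sure, let me work through the steps to answer this."]

caution_dir = mean_activation(cautious_prompts) - mean_activation(compliant_prompts)
caution_dir = caution_dir / caution_dir.norm()


def ablate_hook(module, inputs, output):
    """Project out the caution direction from this layer's output hidden states."""
    hidden = output[0] if isinstance(output, tuple) else output
    d = caution_dir.to(device=hidden.device, dtype=hidden.dtype)
    hidden = hidden - (hidden @ d).unsqueeze(-1) * d
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden


handle = model.model.layers[LAYER].register_forward_hook(ablate_hook)
# ... call model.generate(...) here; the hook removes the caution component
#     from every position at this layer, including generated CoT tokens ...
handle.remove()
```

As written, the hook ablates the direction at a single layer and at all token positions; restricting the intervention to CoT token activations only, as the abstract describes, would require gating the hook on the generated positions.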
Submission Number: 132