Steering LLM Reasoning Through Bias-Only Adaptation

Published: 01 Jul 2025, Last Modified: 04 Jul 2025, ICML 2025 R2-FM Workshop Poster, CC BY 4.0
Keywords: knowledge tracing/discovering/inducing, parameter-efficient-training, chain-of-thought, reasoning
TL;DR: Steering vectors match full fine-tuning on reasoning tasks, indicating the model already possesses reasoning knowledge prior to RL training.
Abstract: Recent work on reasoning-oriented language models, exemplified by o1-like systems, suggests that reinforcement-learning (RL) fine-tuning does not create new capabilities but instead strengthens reasoning patterns already latent in the pretrained network. We test this claim by training steering vectors: layer-wise biases that additively amplify selected hidden features while leaving all original weights unchanged. Experiments on four base models across the GSM8K and MATH benchmarks show that steering vectors recover, and in several cases exceed, the accuracy of fully fine-tuned counterparts. This result supports the view that the required reasoning skills pre-exist in the base model. Further, logit-lens analysis reveals that the trained vectors consistently boost token groups linked to structured languages and logical connectors, providing an interpretable account that aligns with the demands of quantitative reasoning tasks.
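To make the bias-only adaptation concrete, here is a minimal PyTorch sketch of what the abstract describes: a trainable additive bias per layer on the residual stream, with all original weights frozen. The paper does not publish its implementation, so the names (`SteeringVector`, `attach_steering_vectors`) and the LLaMA-style module layout (`model.model.layers`, `model.config.hidden_size`) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class SteeringVector(nn.Module):
    """A trainable per-layer bias added to the residual stream.

    The base model's weights stay frozen; only this bias is learned,
    so the adapter can amplify hidden features additively but cannot
    rescale or rotate them.
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.bias


def attach_steering_vectors(model):
    """Attach one steering vector per decoder layer via forward hooks."""
    vectors = nn.ModuleList(
        SteeringVector(model.config.hidden_size) for _ in model.model.layers
    )
    for layer, vec in zip(model.model.layers, vectors):
        def hook(module, inputs, output, vec=vec):
            # LLaMA-style decoder layers return a tuple whose first
            # element is the hidden-state tensor.
            if isinstance(output, tuple):
                return (vec(output[0]),) + output[1:]
            return vec(output)
        layer.register_forward_hook(hook)

    # Freeze every original weight; only the biases receive gradients.
    for p in model.parameters():
        p.requires_grad_(False)
    return vectors
```

Training would then build an optimizer over `vectors.parameters()` only, e.g. `torch.optim.AdamW(vectors.parameters(), lr=...)`, leaving the base model untouched.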
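The abstract also mentions a logit-lens analysis of the trained vectors. The paper does not give its procedure, but one common way to implement such a readout, again assuming the LLaMA-style naming above (`model.model.norm`, `model.lm_head`; `logit_lens_top_tokens` is an illustrative name), is to pass the bias through the final norm and unembedding matrix and inspect the top-scoring tokens:

```python
import torch

@torch.no_grad()
def logit_lens_top_tokens(vec, model, tokenizer, k: int = 10):
    """Read a trained steering vector through the model's own output head.

    Projecting the bias into vocabulary space shows which tokens the
    vector pushes up, mirroring the standard logit-lens readout.
    """
    h = model.model.norm(vec.bias)   # final RMSNorm (LLaMA-style naming)
    logits = model.lm_head(h)        # project into vocabulary space
    top = logits.topk(k)
    return [
        (tokenizer.decode([int(i)]), float(v))
        for i, v in zip(top.indices, top.values)
    ]
```

Under the paper's finding, such a readout would surface tokens tied to structured languages and logical connectors.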
Submission Number: 38