COLD-Steer: Steering Large Language Models via In-Context One-step Learning Dynamics

ICLR 2026 Conference Submission21034 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Steerable Generation, Large language models, Representation Engineering, Test-time Intervention, Learning Dynamics
TL;DR: We introduce COLD-Steer, an optimization-free, sample-efficient activation steering framework that leverages the in-Context One-step Learning Dynamics of given examples to steer LLM behavior during inference.
Abstract: Activation steering methods enable inference-time control of large language model (LLM) behavior without retraining, but current approaches either extract suboptimal steering signals from labeled examples or require hundreds to thousands of examples and a bespoke optimization procedure for each behavioral target. We introduce COLD-Steer, a training-free framework that steers LLM activations by approximating the representational changes that would result from gradient descent on in-context examples. Our key insight is that the effect of fine-tuning on a small set of examples can be efficiently approximated at inference time without actual parameter updates. We formalize this through two complementary approaches: (i) a unit kernel approximation method that updates the activations directly using gradients with respect to them, normalized across examples, and (ii) a finite-difference approximation requiring only two forward passes regardless of example count. Experiments across a variety of steering tasks and benchmarks demonstrate that COLD-Steer achieves up to 95\% steering effectiveness while using 50 times fewer samples than the best baseline. COLD-Steer enables real-time adaptation to new steering objectives and accommodates diverse perspectives without extensive demonstration data, which we validate through experiments on pluralistic alignment tasks. Our framework opens new possibilities for adaptive, context-aware model control that can flexibly address varying loss-driven human preferences through principled approximation of learning dynamics rather than specialized training procedures.
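The abstract describes the finite-difference variant only at a high level (two forward passes, independent of example count). A minimal toy sketch of that general idea, with a made-up one-layer "model" and a hypothetical scaling factor `alpha` (none of these names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # toy "layer" weights standing in for an LLM layer

def hidden(tokens):
    """Toy forward pass: mean token embedding through one nonlinearity."""
    emb = np.stack([W[t % 8] for t in tokens])
    return np.tanh(emb.mean(axis=0))

prompt = [1, 2, 3]
examples = [4, 5, 6, 7]  # in-context demonstrations of the target behavior

# Two forward passes: with and without the in-context examples prepended.
h_plain = hidden(prompt)
h_with = hidden(examples + prompt)

# Finite-difference steering direction; alpha (hypothetical) sets strength.
alpha = 1.0
steer = alpha * (h_with - h_plain)

# At inference, add the steering vector to the plain-prompt activations.
h_steered = hidden(prompt) + steer
```

Note the cost structure the abstract highlights: adding more in-context examples only lengthens the second forward pass, so the number of passes stays at two.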
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21034