Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations

ICLR 2026 Conference Submission 20637 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: risky choices, steering, large language model, representation engineering, AI safety
TL;DR: We propose a self-alignment method to derive steering vectors by aligning behavioral and neural representations of risk.
Abstract: Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer’s residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach, which we call self-alignment, that uncovers steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
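The abstract describes steering as adding a constructed vector to the Transformer's residual-stream activations at inference time. The following is a minimal sketch of that general mechanism, not the authors' code: the toy model, the layer choice, and the scaling coefficient `alpha` are illustrative assumptions, and the steering vector here is random rather than derived from the paper's self-alignment procedure.

```python
import torch
import torch.nn as nn

hidden_dim = 64

# Stand-in for a Transformer block whose output is the residual stream.
class ToyBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):
        return x + self.linear(x)  # residual connection

model = nn.Sequential(*[ToyBlock(hidden_dim) for _ in range(4)])

# Hypothetical steering vector; in the paper it would come from aligning
# behavioral (MCMC-elicited) and neural representations of risk.
steering_vector = torch.randn(hidden_dim)
steering_vector = steering_vector / steering_vector.norm()
alpha = 4.0  # steering strength (assumed hyperparameter)

def add_steering(module, inputs, output):
    # Shift the residual-stream activations along the steering direction.
    return output + alpha * steering_vector

# Steer at one intermediate layer (the layer index is an assumption).
handle = model[2].register_forward_hook(add_steering)

x = torch.randn(1, 8, hidden_dim)  # (batch, sequence, hidden)
steered = model(x)
handle.remove()
unsteered = model(x)
print((steered - unsteered).abs().mean())  # nonzero: activations were shifted
```

In practice the same hook pattern applies to a pretrained LLM by registering it on a chosen decoder layer, which modifies behavior without any retraining or fine-tuning.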
Primary Area: applications to neuroscience & cognitive science
Submission Number: 20637