Steering at the Source: Style Modulation Heads for Robust Persona Control
Track: long paper (up to 10 pages)
Domain: machine learning
Abstract: Activation steering offers a computationally efficient mechanism for controlling Large Language Models (LLMs) without fine-tuning.
While activation steering effectively controls target traits (e.g., persona), coherence degradation remains a major obstacle to safe and practical deployment.
We hypothesize that this degradation stems from intervening on the residual stream, which indiscriminately affects aggregated features and inadvertently amplifies off-target noise.
In this work, we identify a sparse subset of attention heads (only three heads) that independently govern persona and style formation, which we term *Style Modulation Heads*.
Specifically, these heads can be localized via geometric analysis of internal representations, combining layer-wise cosine similarity and head-wise contribution scores.
We demonstrate that intervening on only these specific heads achieves robust behavioral control while significantly mitigating the coherence degradation observed with residual-stream steering.
More broadly, our findings show that precise, component-level localization enables safer and more reliable model control.
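The localization procedure described in the abstract (layer-wise cosine similarity plus head-wise contribution scores, then selecting a sparse subset of heads) can be sketched roughly as follows. This is a minimal illustration with synthetic activations, not the paper's implementation; the array shapes, the norm-of-difference contribution score, and the top-3 selection are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cached per-head activations: [layers, heads, d_model].
# In practice these would come from running the model on persona vs. neutral prompts.
n_layers, n_heads, d_model = 4, 8, 16
base = rng.normal(size=(n_layers, n_heads, d_model))
persona = base.copy()
persona[2, 5] += 3.0  # pretend a few heads carry the style shift
persona[1, 3] += 2.0
persona[3, 0] += 1.5

def cosine(a, b):
    """Cosine similarity between two flattened activation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Layer-wise cosine similarity between persona and baseline activations:
# layers where similarity drops are candidates for style formation.
layer_sim = [cosine(persona[l].ravel(), base[l].ravel()) for l in range(n_layers)]

# Head-wise contribution score: here, the norm of the activation
# difference each head contributes (one assumed choice of score).
contrib = np.linalg.norm(persona - base, axis=-1)  # [layers, heads]

# Select the top-3 heads by contribution score as candidate
# "Style Modulation Heads".
flat = contrib.ravel()
top = np.argsort(flat)[-3:][::-1]
style_heads = [(int(i // n_heads), int(i % n_heads)) for i in top]
print(style_heads)  # → [(2, 5), (1, 3), (3, 0)]
```

With the synthetic data above, the selection recovers exactly the three heads that were perturbed, illustrating why a sparse, component-level intervention can target style while leaving the rest of the residual stream untouched.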
Presenter: ~Yoshihiro_Izawa1
Submission Number: 13