Abstract: Controlling stylistic attributes in large language models (LLMs) remains challenging, with existing approaches relying on either prompt engineering or post-training alignment. This paper investigates this challenge through the lens of representation engineering, testing the hypothesis that fine-grained stylistic attributes—from emotional tone to linguistic structure—are encoded as linear directions in the model's activation space. We provide strong empirical evidence for this hypothesis across a wide range of styles and, based on this finding, present a lightweight, training-free method for precise style control. Our approach supports linear style composition, enhances safety by ablating undesirable behaviors, and, as confirmed by experiments on over a dozen models, achieves high style adherence while preserving core capabilities at minimal computational cost.
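This excerpt gives no implementation details, so the following is a minimal NumPy sketch of the general technique the abstract names: estimating a linear style direction in activation space (here via a difference of means over two sets of hidden states, an assumed extraction method), steering by adding a scaled direction, ablating a behavior by projecting its direction out, and composing styles linearly. The arrays are random stand-ins for real model activations, and the helper names (`steer`, `ablate`, `compose`) and strength values are illustrative placeholders, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in activations: d-dimensional hidden states collected at one layer
# for prompts exhibiting the target style vs. neutral prompts. In practice
# these would come from forward hooks on a real model; here they are random
# samples so the sketch runs end to end. (Illustrative assumption.)
d = 64
styled = rng.normal(0.0, 1.0, size=(100, d)) + 2.0  # shifted cluster
neutral = rng.normal(0.0, 1.0, size=(100, d))

# Difference-of-means estimate of a linear "style direction" — one common
# extraction method under the linear-representation hypothesis; the paper's
# exact procedure is not specified in this excerpt.
v = styled.mean(axis=0) - neutral.mean(axis=0)
v = v / np.linalg.norm(v)

def steer(h, direction, alpha):
    """Add the style direction to a hidden state with strength alpha."""
    return h + alpha * direction

def ablate(h, direction):
    """Remove the style entirely by projecting its direction out of h."""
    return h - np.dot(h, direction) * direction

def compose(h, directions, alphas):
    """Linear style composition: add several scaled directions at once."""
    return h + sum(a * u for a, u in zip(alphas, directions))

h = neutral[0]
print("style component before steering:", np.dot(h, v))
print("after steering (alpha=3):       ", np.dot(steer(h, v, 3.0), v))
print("after ablation (should be ~0):  ", np.dot(ablate(h, v), v))
```

Because steering and ablation are single vector additions or projections per layer, the method is training-free and adds negligible overhead at inference, consistent with the abstract's claim of minimal computational cost.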
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: language models, controllable generation, vector editing, style control, model alignment
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English, French, Italian, Portuguese, German, Chinese, Japanese
Submission Number: 909