Pref-CTRL: Preference-Driven LLM Alignment using Representation Editing

ACL ARR 2026 January Submission 7880 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · License: CC BY 4.0
Keywords: Language Modeling, Test-time Alignment, Preference Optimization, Representation Editing
Abstract: Test-time alignment methods offer a promising alternative to fine-tuning by steering the outputs of large language models (LLMs) at inference time with lightweight interventions on their internal representations. A prominent and effective recent approach, RE-Control (Kong et al., 2024), trains an external value function over the LLM's hidden states and uses it to guide generation via gradient-based editing. While effective, this method overlooks a key characteristic of alignment tasks: they are typically formulated as learning from human preferences between candidate responses. To address this, we propose **Pref-CTRL**, a preference-based training framework that uses a multi-objective value function to better reflect the structure of preference data. Pref-CTRL outperforms RE-Control on two benchmark datasets and generalizes better to an out-of-domain dataset, PKU-SafeRLHF. Our source code is available at https://anonymous.4open.science/r/prefctrl.
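
To make the intervention concrete, below is a minimal PyTorch sketch of the kind of gradient-based hidden-state editing the abstract describes: a small value head scores a hidden state, and a few ascent steps nudge the state toward higher predicted value before decoding continues. The `ValueHead` module, the `edit_hidden_state` helper, and the `step_size`/`n_steps` hyperparameters are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ValueHead(nn.Module):
    """Small MLP scoring a hidden state; stands in for a trained value function."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h).squeeze(-1)

def edit_hidden_state(h: torch.Tensor, value_fn: nn.Module,
                      step_size: float = 0.1, n_steps: int = 3) -> torch.Tensor:
    """Gradient-ascent edit: nudge hidden states toward higher predicted value."""
    h = h.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        value = value_fn(h).sum()          # scalar objective over the batch
        (grad,) = torch.autograd.grad(value, h)
        h = (h + step_size * grad).detach().requires_grad_(True)
    return h.detach()

# Toy usage: edit a batch of 2 hidden states of dimension 16.
value_fn = ValueHead(hidden_dim=16)
h = torch.randn(2, 16)
h_edited = edit_hidden_state(h, value_fn)
print(value_fn(h), value_fn(h_edited))     # edited states should score higher
```

Under the framing in the abstract, Pref-CTRL's contribution lies in how the value function is trained (from preference pairs, with multiple objectives), while an inference-time editing loop like the one above stays structurally the same.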
Paper Type: Short
Research Area: Natural Language Generation
Research Area Keywords: text-to-text generation, inference methods, model architectures
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 7880