UniARM: Towards a Unified Autoregressive Reward Model for Multi-Objective Test-Time Alignment

06 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Test-time Alignment, Reward Model, Multi-Objective Alignment
Abstract: Multi-objective alignment aims to align LLM responses with multiple human preference objectives. Among existing methods, guiding the generation of frozen LLMs with autoregressive reward models (ARMs) offers a low-cost route to multi-objective test-time alignment. However, these methods typically rely on independent parameters for each preference objective: either ARMs are trained independently across preference dimensions, which neglects interactions among preference features, or a single ARM is trained with separate feature extraction modules per preference, which can cause feature entanglement. Both strategies can result in misalignment between generated outputs and user preferences. To address this limitation, we propose Preference-Modulated \& Shared Low-Rank Adaptation (MoSLoRA) for ARM training, which first extracts shared features via a preference-agnostic module and then applies affine transformations to those shared features via a preference-modulation module conditioned on mixed preference vectors. This design mitigates feature entanglement and enables precise control over preference trade-offs during inference. Building on this, we introduce the Unified Autoregressive Reward Model (UniARM), a novel framework for multi-objective test-time alignment. UniARM jointly models all preference dimensions in a single parameter space, eliminating the need for independent parameters per preference objective. Experimental results show that UniARM improves HV and MIP by 18.5\% and 30.2\% on the safety alignment task. It also enables weak-to-strong guidance, where a smaller UniARM guides a larger frozen LLM, yielding HV and MIP improvements of 9.1\% and 6.8\% on the safety alignment task, and 5.4\% and 10.7\% on the assistant task. Notably, these gains are achieved without introducing additional parameters or increasing inference latency.
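The preference-modulation step described in the abstract (affine transformations of shared features conditioned on a mixed preference vector) can be sketched as follows. This is a minimal, dependency-free illustration of the general idea, not the paper's implementation: all names (`affine_modulate`, `W_gamma`, `W_beta`), shapes, and weight values are assumptions.

```python
def affine_modulate(shared_feats, pref_vec, W_gamma, W_beta):
    """Scale and shift preference-agnostic shared features, with the
    scale (gamma) and shift (beta) produced from the mixed preference
    vector via learned linear maps (here: hand-picked toy matrices)."""
    dim = len(shared_feats)
    k = len(pref_vec)
    gamma = [sum(W_gamma[i][j] * pref_vec[j] for j in range(k)) for i in range(dim)]
    beta = [sum(W_beta[i][j] * pref_vec[j] for j in range(k)) for i in range(dim)]
    # Per-dimension affine transformation: gamma * h + beta
    return [gamma[i] * shared_feats[i] + beta[i] for i in range(dim)]

# Two preference objectives (e.g. helpfulness vs. safety) mixed as a convex combination
pref = [0.7, 0.3]
h = [1.0, -2.0, 0.5]                         # shared features from the preference-agnostic module
W_g = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # toy modulation weights (assumed)
W_b = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0]]
modulated = affine_modulate(h, pref, W_g, W_b)
```

Varying `pref` at inference time changes only the affine parameters, leaving the shared features untouched, which is how a single parameter space can serve arbitrary preference trade-offs.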
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 2534