Abstract: The alignment of large language models (LLMs) with human preferences remains a key challenge. While post-training techniques such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have achieved notable success, they often suffer from computational inefficiency and training instability. In this paper, we propose \textbf{F}eature-level constrained \textbf{P}reference \textbf{O}ptimization (FPO), a novel method designed to simplify the alignment process while ensuring stability. FPO leverages pre-trained Sparse Autoencoders (SAEs) and introduces feature-level constraints, allowing for efficient, sparsity-enforced alignment. Our approach gains efficiency by using the sparse features activated in a well-trained sparse autoencoder, and retains the quality of sequential KL divergence by using a feature-level offline reference. Experimental results on benchmark datasets demonstrate that FPO achieves an absolute improvement of more than 5\% in win rate at a much lower computational cost than state-of-the-art baselines, making it a promising solution for efficient and controllable LLM alignment.
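For intuition, the kind of objective the abstract describes can be sketched as follows; this is an illustrative formulation only, and the symbols, divergence choice, and weighting below are assumptions rather than the paper's exact definition:
\[ \mathcal{L}_{\mathrm{FPO}} \;=\; \mathcal{L}_{\mathrm{DPO}}(\pi_\theta) \;+\; \lambda \, D\!\big( f_{\mathrm{SAE}}(h_{\pi_\theta}) \,\big\Vert\, \hat{f}_{\mathrm{ref}} \big), \]
where $f_{\mathrm{SAE}}(h_{\pi_\theta})$ denotes sparse feature activations of the policy's hidden states under the pre-trained SAE, $\hat{f}_{\mathrm{ref}}$ denotes reference feature activations precomputed offline, $D$ is a divergence applied at the feature level, and $\lambda$ controls the strength of the constraint.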
Lay Summary: Problem: Large Language Models, the AI behind chatbots, need "alignment" to ensure their answers are helpful and safe, matching human preferences. However, current alignment methods can be slow, expensive, and sometimes unstable during training. It's difficult to make them both efficient and precisely controllable.
Solution: We introduce Feature-level constrained Preference Optimization (FPO), a new method using Sparse Autoencoders (SAEs). SAEs help us understand the core concepts or "features" the model uses. FPO guides the model at this deeper feature level, rather than just at the words (tokens) it produces. It also pre-computes reference data offline, making training faster and less memory-intensive.
Impact: FPO makes LLM alignment more efficient and stable. Our tests show it outperforms top methods while using fewer resources. Importantly, it allows fine-grained control, letting us adjust specific AI behaviors—like improving safety or managing its use of different languages—without harming overall performance. This makes creating well-behaved and reliable AI more practical.
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Alignment, SAE, LLM
Submission Number: 8525