MPO: An Efficient Post-Processing Framework for Mixing Diverse Preference Alignment

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Reinforcement Learning from Human Feedback (RLHF) has shown promise in aligning large language models (LLMs). Yet its reliance on a single reward model often overlooks the diversity of human preferences. Recent approaches address this limitation by leveraging multi-dimensional feedback to fine-tune corresponding reward models and train LLMs using reinforcement learning. However, this process is costly and unstable, especially given the competing and heterogeneous nature of human preferences. In this paper, we propose Mixing Preference Optimization (MPO), a post-processing framework for aggregating single-objective policies as an alternative to both multi-objective RLHF (MORLHF) and MaxMin-RLHF. MPO avoids alignment from scratch: instead, it log-linearly combines existing policies into a unified one, with the weight of each policy computed via batch stochastic mirror descent. Empirical results demonstrate that MPO achieves balanced performance across diverse preferences, outperforming or matching existing models at significantly reduced computational cost.
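For intuition, the following is a minimal numerical sketch of the kind of log-linear policy mixing and mirror-descent weight update the abstract describes. The function names, tensor shapes, and the exponentiated-gradient step are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of MPO-style log-linear policy mixing.
# Names, shapes, and the weight-update rule are assumptions for illustration.
import numpy as np

def mix_logits(per_policy_logits, weights):
    """Log-linearly combine K single-objective policies.

    per_policy_logits: shape (K, vocab_size), next-token logits from each
    aligned policy for the same prompt/prefix.
    weights: simplex weights over the K policies.
    The mixture satisfies pi_mix(y|x) proportional to prod_k pi_k(y|x)^{w_k},
    i.e. a weighted sum in log space followed by renormalization.
    """
    mixed = np.tensordot(weights, per_policy_logits, axes=1)  # (vocab_size,)
    mixed -= mixed.max()                                      # numerical stability
    probs = np.exp(mixed)
    return probs / probs.sum()

def mirror_descent_step(weights, grad, step_size=0.1):
    """One exponentiated-gradient step (mirror descent on the simplex).

    grad: an estimated gradient of the mixing objective w.r.t. the weights,
    e.g. computed from a batch of reward evaluations (assumed available).
    """
    new_w = weights * np.exp(-step_size * grad)
    return new_w / new_w.sum()

# Toy usage with K = 3 policies and a 5-token vocabulary.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
w = np.ones(3) / 3
probs = mix_logits(logits, w)        # mixed next-token distribution
w = mirror_descent_step(w, grad=rng.normal(size=3))
print(probs, w)
```

The multiplicative update keeps the weights on the probability simplex without an explicit projection, which is the usual appeal of mirror descent in this setting; the paper's actual batch stochastic objective and gradient estimator may differ.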
Lay Summary: Today's AI tools, such as ChatGPT, are usually trained to follow what people want, but most rely on just one idea of what "good" looks like. That's a problem because people care about different things: some want answers to be helpful, others want them to be funny, and some want them to be safe. Training a new AI for every kind of preference takes a lot of time and computing power. Instead, we created a method called Mixing Preference Optimization (MPO). It takes a few existing AIs, each tuned to a different goal, and combines their decisions into one balanced model with provable guarantees. Rather than picking a single answer or switching between models, MPO blends their outputs in a stable and principled way, like averaging expert advice to reach a fair decision. This creates an AI that better reflects diverse human preferences, while using less time and fewer resources than existing methods.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Deep Learning->Large Language Models
Keywords: Direct Preference Optimization, Large Language Models, RLHF
Submission Number: 6828