HPS: Hard Preference Sampling for Human Preference Alignment

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose Hard Preference Sampling (HPS), a novel framework for robust and efficient human preference alignment.
Abstract: Aligning Large Language Model (LLM) responses with human preferences is vital for building safe and controllable AI systems. While preference optimization methods based on Plackett-Luce (PL) and Bradley-Terry (BT) models have shown promise, they face challenges such as poor handling of harmful content, inefficient use of dispreferred responses, and, specifically for PL, high computational costs. To address these issues, we propose Hard Preference Sampling (HPS), a novel framework for robust and efficient human preference alignment. HPS introduces a training loss that prioritizes the most preferred response while rejecting all dispreferred and harmful ones. It emphasizes “hard” dispreferred responses — those closely resembling preferred ones — to enhance the model’s rejection capabilities. By leveraging a single-sample Monte Carlo sampling strategy, HPS reduces computational overhead while maintaining alignment quality. Theoretically, HPS improves sample efficiency over existing PL methods and maximizes the reward margin between preferred and dispreferred responses, ensuring clearer distinctions. Experiments on HH-RLHF and PKU-Safety datasets validate HPS’s effectiveness, achieving comparable BLEU and reward scores while greatly improving reward margins and thus reducing harmful content generation.
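To make the abstract's loss concrete, below is a minimal, illustrative sketch of a hard-preference-style objective, written by the reader and not taken from the paper or its repository. It assumes a DPO-style implicit-reward parameterization (beta-scaled log-ratio between the policy and a frozen reference model); the function name `hps_style_loss`, the `hardness_temp` knob, and the tensor layout are all hypothetical. It only illustrates the two ideas named in the abstract: upweighting "hard" dispreferred responses and replacing the sum over all dispreferred responses with a single Monte Carlo sample.

```python
import torch
import torch.nn.functional as F

def hps_style_loss(policy_chosen_logp, policy_rejected_logps,
                   ref_chosen_logp, ref_rejected_logps,
                   beta=0.1, hardness_temp=1.0):
    """Illustrative hard-preference loss (hypothetical; not the paper's exact objective).

    policy_chosen_logp:    (batch,)   log pi_theta(y_w | x) for the preferred response
    policy_rejected_logps: (batch, K) log pi_theta(y_l | x) for K dispreferred responses
    ref_*:                 matching log-probs under the frozen reference model
    """
    # DPO-style implicit rewards: beta * log(pi_theta / pi_ref)
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)             # (batch,)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)    # (batch, K)

    # "Hard" dispreferred responses get larger weight: a softmax over their rewards,
    # so negatives that most resemble the preferred response dominate.
    hard_weights = F.softmax(rejected_rewards / hardness_temp, dim=-1)        # (batch, K)

    # Single-sample Monte Carlo: draw one dispreferred response per example
    # from the hardness distribution instead of summing over all K.
    idx = torch.multinomial(hard_weights, num_samples=1)                      # (batch, 1)
    sampled_rejected_reward = rejected_rewards.gather(-1, idx).squeeze(-1)    # (batch,)

    # Maximize the reward margin between the preferred response and the sampled hard negative.
    return -F.logsigmoid(chosen_reward - sampled_rejected_reward).mean()
```

For the authors' actual formulation, theoretical guarantees, and training recipe, see the linked code repository below.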
Lay Summary: Large language models (LLMs) are powerful tools, but they don’t always respond in ways that are safe, helpful, or aligned with human values. Sometimes, they generate content that’s misleading, inappropriate, or even harmful. This raises a crucial question: how can we reliably fine-tune LLMs to give preferred responses while avoiding dispreferred ones? To address this, we propose Hard Preference Sampling (HPS), a novel preference optimization framework for robust and efficient human preference alignment. HPS introduces a training loss that prioritizes the most preferred response while rejecting all dispreferred and harmful ones. It emphasizes “hard” dispreferred responses—those closely resembling preferred ones—to enhance the model’s rejection capabilities. By leveraging a streamlined sampling approach, HPS reduces computational overhead while maintaining alignment quality. Our findings have implications for how we fine-tune language models more effectively and efficiently to align with human preferences, highlighting that focusing on “hard” dispreferred responses is crucial for improving trustworthiness.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/LVLab-SMU/HPS
Primary Area: Deep Learning->Large Language Models
Keywords: Alignment, Preference Optimization, RLHF, Large Language Models
Submission Number: 1462