Leveraging Sparsity for Sample-Efficient Preference Learning: A Theoretical Perspective

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: This paper considers the sample efficiency of preference learning, which models and predicts human choices based on comparative judgments. The minimax optimal estimation error rate $\Theta(d/n)$ in classical estimation theory requires that the number of samples $n$ scale linearly with the dimensionality of the feature space $d$. However, the high dimensionality of the feature space and the high cost of collecting human-annotated data challenge the efficiency of traditional estimation methods. To remedy this, we leverage sparsity in the preference model and establish sharp error rates. We show that under the sparse random utility model, where the parameter of the reward function is $k$-sparse, the minimax optimal rate can be reduced to $\Theta(k\log(d/k)/n)$. Furthermore, we analyze the $\ell_{1}$-regularized estimator and show that it achieves a near-optimal rate under mild assumptions on the Gram matrix. Experiments on synthetic data and LLM alignment data validate our theoretical findings, showing that sparsity-aware methods significantly reduce sample complexity and improve prediction accuracy.
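To make the setting concrete, here is a minimal sketch (not the authors' implementation) of the $\ell_{1}$-regularized approach the abstract describes: preference labels are drawn from a Bradley–Terry-style logistic model with a $k$-sparse reward parameter, and an $\ell_{1}$-penalized logistic regression is fit on the feature differences. All dimensions, signal strengths, and the regularization constant `C` below are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, k, n = 200, 5, 500  # ambient dimension, sparsity level, number of comparisons

# Ground-truth k-sparse reward parameter (strong signal on the support)
theta = np.zeros(d)
theta[:k] = 2.0

# Each comparison supplies a feature difference x = phi(a) - phi(b);
# the label follows a logistic (Bradley-Terry) link: y ~ Bernoulli(sigmoid(<theta, x>))
X = rng.normal(size=(n, d))
p = 1.0 / (1.0 + np.exp(-X @ theta))
y = (rng.random(n) < p).astype(int)

# l1-regularized maximum likelihood; C (inverse regularization strength)
# is a hypothetical, untuned choice
est = LogisticRegression(
    penalty="l1", solver="liblinear", C=0.5, fit_intercept=False
).fit(X, y)
theta_hat = est.coef_.ravel()

# The l1 penalty drives most off-support coordinates to zero,
# so far fewer than d coordinates survive
nonzeros = np.count_nonzero(np.abs(theta_hat) > 1e-6)
print(nonzeros)
```

With $n \ll d$-scale budgets this estimator concentrates its mass on the true support, which is the mechanism behind the improved $k\log(d/k)/n$-type rates discussed in the abstract.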
Lay Summary: Training AI to follow human preferences often needs lots of data, which is costly and slow. We noticed that people usually care about only a few key factors when making choices. By focusing on this idea, we developed a method that learns much faster by ignoring unimportant details. This helps build AI systems—like helpful chatbots—that better understand what people want, using less data.
Link To Code: https://github.com/yaoyzh/SparsePreferenceLearning
Primary Area: Social Aspects->Alignment
Keywords: preference learning, RLHF, sparsity, statistical estimation, reward modeling, sample efficiency
Submission Number: 2488