Keywords: Large Language Models, Preference Alignment, Nash Equilibrium, Nash Learning from Human Feedback
Abstract: Nash Learning from Human Feedback (NLHF) is a game-theoretic framework for aligning large language models (LLMs) with human preferences by modeling learning as a two-player zero-sum game. When the payoff is defined by the true underlying preference, the framework guarantees desirable alignment properties. However, the ground-truth preference matrix is often unavailable in practice due to limited or noisy data, which substantially constrains the effectiveness of this game-theoretic approach to LLM alignment. In this paper, we systematically study which payoffs based on pairwise human preferences yield desirable alignment properties.
We establish necessary and sufficient conditions for Condorcet consistency, diversity through mixed strategies, and Smith consistency.
These results provide a theoretical foundation for the robustness of game-theoretic LLM alignment.
Further, we show the impossibility of preference matching: no smooth, learnable mapping of pairwise preferences can guarantee a unique Nash equilibrium that matches a target policy, even under standard assumptions such as the Bradley-Terry-Luce model.
This result highlights a fundamental limitation of game-theoretic LLM alignment.
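For readers unfamiliar with the setup, the following is a minimal sketch of the standard NLHF formulation referenced above; the notation ($\pi$, $\mathcal{P}$, $r$, $\sigma$) is assumed here for illustration and is not taken from the submission. NLHF seeks a Nash equilibrium of the two-player zero-sum game whose payoff is the pairwise preference probability:
\[
\pi^{*} \in \arg\max_{\pi} \min_{\pi'} \; \mathbb{E}_{y \sim \pi,\; y' \sim \pi'} \big[ \mathcal{P}(y \succ y') \big],
\]
where $\mathcal{P}(y \succ y')$ denotes the probability that a human prefers response $y$ over $y'$. The Bradley-Terry-Luce model mentioned in the abstract posits that these pairwise preferences arise from a latent reward function $r$,
\[
\mathcal{P}(y \succ y') = \sigma\big(r(y) - r(y')\big) = \frac{\exp\!\big(r(y)\big)}{\exp\!\big(r(y)\big) + \exp\!\big(r(y')\big)},
\]
with $\sigma$ the logistic function. The paper's question is which transformations of $\mathcal{P}$, used as the game payoff in place of the (unavailable) ground truth, preserve properties such as Condorcet and Smith consistency.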
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21564