Keywords: Reward Modeling, Large Language Models, RLHF
Abstract: Reward models are crucial for aligning large language models (LLMs) with human values and intentions.
Existing approaches follow either the generative reward model (GRM) or the discriminative reward model (DRM) paradigm, yet both suffer from limitations:
GRMs typically demand costly point-wise supervision, while DRMs produce uncalibrated relative scores that lack a probabilistic interpretation.
To address these challenges, we introduce a novel reward modeling paradigm: the Probabilistic Reward Model (PRM).
Instead of modeling reward as a deterministic scalar, our approach treats it as a random variable, learning a full probability distribution for the quality of each response.
To make this paradigm practical, we present its closed-form, discrete realization: the **Ordinal Probabilistic Reward Model** (OPRM), which discretizes the quality score into a finite set of ordinal ratings.
Building on OPRM, we propose a data-efficient training strategy called **Region Flooding Tuning** (RgFT).
It enables rewards to better reflect absolute text quality by incorporating quality-level annotations, which guide the model to concentrate the probability mass within corresponding rating sub-regions.
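As a rough illustration (not taken from the paper), the sketch below shows one way an ordinal probability head and a region-flooding-style objective could be set up: the bin count `K`, the mapping from quality levels to rating sub-regions, and names such as `OrdinalRewardHead` and `region_flooding_loss` are assumptions for exposition only.

```python
# Minimal sketch (assumed, not the authors' implementation): an ordinal head that
# outputs a distribution over K discrete quality ratings, plus a flooding-style
# loss that pushes probability mass into the bins of an annotated quality level.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of ordinal quality ratings (assumed)

class OrdinalRewardHead(nn.Module):
    """Maps a pooled LLM hidden state to a categorical distribution over K ratings."""
    def __init__(self, hidden_size: int, num_ratings: int = K):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_ratings)
        # Fixed rating values in [0, 1] used to compute an expected (scalar) reward.
        self.register_buffer("values", torch.linspace(0.0, 1.0, num_ratings))

    def forward(self, h: torch.Tensor):
        probs = F.softmax(self.proj(h), dim=-1)          # (batch, K) distribution over ratings
        expected_reward = (probs * self.values).sum(-1)  # scalar reward = E[rating value]
        return probs, expected_reward

def region_flooding_loss(probs: torch.Tensor, level: torch.Tensor, bins_per_level: int = 2):
    """Concentrate probability mass in the sub-region of bins owned by the annotated level.

    Each coarse quality level is assumed to own `bins_per_level` adjacent ratings.
    """
    lo = level * bins_per_level                          # first bin of the sub-region
    idx = lo.unsqueeze(-1) + torch.arange(bins_per_level, device=probs.device)
    region_mass = probs.gather(-1, idx).sum(-1)          # total mass inside the sub-region
    return -torch.log(region_mass.clamp_min(1e-8)).mean()

# Usage: pooled hidden states of shape (batch, hidden) and quality levels in {0, ..., K//bins_per_level - 1}.
head = OrdinalRewardHead(hidden_size=4096)
h = torch.randn(4, 4096)
levels = torch.tensor([0, 2, 3, 4])
probs, reward = head(h)
loss = region_flooding_loss(probs, levels)
loss.backward()
```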
Experiments on various reward model benchmarks show that our method improves accuracy by **2.9% to 7.4%** over prior reward models, demonstrating strong performance and data efficiency.
Analysis of the score distribution provides evidence that our method captures not only relative rankings but also absolute quality.
Our models, data, and code will be publicly released.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16883