Reward Yourself: Efficient Self Rewards for Trustworthy Sampling

ACL ARR 2026 January Submission 3783 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Reward, Sampling, LLM, Trustworthy
Abstract: As high-quality data becomes harder to obtain, reward models are increasingly important. Beyond the costly RLHF stage, they are now used at inference time to guide LLM generation and to select data for post-training. These methods bring efficiency and performance gains, but current reward models often fail to prevent untrustworthy behaviors such as privacy leaks and stereotyping. Re-training reward models to address these issues is expensive, since it requires large-scale human preference data. We propose SelfRW, a lightweight intrinsic reward that requires no extra fine-tuning or auxiliary models. By pruning the current LLM to approximate a “trust” and an “untrust” token distribution, we use the log-probability difference between the two as an auxiliary reward. When integrated into reward-guided sampling, SelfRW significantly reduces untrustworthy outputs while preserving task performance. It also improves reward-guided data selection, yielding better post-trained models. Experiments with two reward models and four LLMs on privacy, bias, and stereotype benchmarks show that adding SelfRW consistently improves trustworthiness (by over 10\% on privacy tasks and 20\% on bias tasks) with minimal impact on general utility benchmarks.
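
A minimal sketch of the intrinsic reward described in the abstract, assuming the “trust” and “untrust” token distributions are available as logits from two pruned variants of the same LLM; the function names, tensor shapes, and the weighting coefficient alpha are illustrative assumptions rather than the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def selfrw_reward(trust_logits, untrust_logits, token_ids):
        # trust_logits / untrust_logits: [batch, seq, vocab] logits from the
        # two pruned variants; token_ids: [batch, seq] generated tokens.
        logp_trust = F.log_softmax(trust_logits, dim=-1).gather(
            -1, token_ids.unsqueeze(-1)).squeeze(-1)
        logp_untrust = F.log_softmax(untrust_logits, dim=-1).gather(
            -1, token_ids.unsqueeze(-1)).squeeze(-1)
        # Intrinsic reward: positive when the "trust" variant assigns the
        # sampled tokens higher probability than the "untrust" variant.
        return (logp_trust - logp_untrust).sum(dim=-1)

    def combined_reward(base_reward, trust_logits, untrust_logits, token_ids,
                        alpha=1.0):
        # Augments an external reward model's score with the intrinsic term;
        # alpha (hypothetical) trades off trustworthiness against task reward.
        return base_reward + alpha * selfrw_reward(
            trust_logits, untrust_logits, token_ids)

In reward-guided sampling, combined_reward would score candidate continuations so that generations the “trust” variant prefers are favored without discarding the base reward model's signal.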
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: Language Modeling
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English
Submission Number: 3783