Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning
Keywords: Reinforcement Learning, Mathematical Reasoning, Large Language Models
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) improves reasoning in large language models but treats all correct solutions equally, potentially reinforcing flawed traces that reach correct answers by chance. We observe that \emph{better reasoning makes a better teacher}: high-quality solutions serve as more effective demonstrations than low-quality ones. We term this teaching ability \textbf{Demonstration Utility}, and show that the policy model's own in-context learning ability provides an efficient way to measure it, yielding a quality signal termed \textbf{Evidence Gain}. To employ this signal during training, we introduce \textbf{In-Context RLVR}. Through a Bayesian analysis, we show that this objective implicitly reweights rewards by Evidence Gain, assigning higher weights to high-quality traces and lower weights to low-quality ones, without requiring costly computation or external evaluators. Experiments on mathematical benchmarks show improvements in both accuracy and reasoning quality over standard RLVR.
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: Reinforcement Learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 8705