Keywords: Large Reasoning Models, Thinking Rubrics, Reinforcement Learning
Abstract: Large Reasoning Models (LRMs) benefit from generating intermediate reasoning steps, which enable more reliable and interpretable decision-making. While outcome-based supervision has proven effective for LRMs across diverse tasks, it scores only the final answer and cannot guarantee high-quality intermediate reasoning. Existing process supervision, in contrast, is largely limited to verifiable domains such as mathematics and code, where intermediate steps can be explicitly checked, and therefore does not extend to open-ended reasoning tasks. To address these limitations, we propose Rubrics-in-Thinking Reinforcement Learning (RiT), the first framework to introduce thinking-rubric supervision into intermediate reasoning. RiT automatically generates fine-grained rubrics and integrates them into the reward function via gated fusion with outcome-based rewards, guiding models to reason in a coherent, task-aligned manner and improving both the intermediate steps and the final response. Experiments on reasoning-intensive and open-ended benchmarks show that RiT consistently outperforms outcome-only RL baselines.
Paper Type: Long
Research Area: Language Models
Research Area Keywords: chain-of-thought, fine-tuning, LLM/AI agents
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, Chinese
Submission Number: 5675
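The abstract names a gated fusion of rubric-based and outcome-based rewards but gives no formula here. Below is a minimal Python sketch of one plausible reading, assuming a binary outcome reward and a gate that admits the rubric signal only when the final answer is correct; the function name `gated_fusion_reward`, the gating rule, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
def gated_fusion_reward(outcome_reward: float,
                        rubric_scores: list[float],
                        lam: float = 0.5) -> float:
    """One plausible gated fusion of rubric and outcome rewards (illustrative).

    `rubric_scores` holds per-criterion scores in [0, 1] from the
    auto-generated thinking rubrics; `outcome_reward` is the usual
    final-answer reward (assumed binary here).
    """
    # Average the fine-grained rubric criteria into a single process score.
    rubric_reward = sum(rubric_scores) / len(rubric_scores) if rubric_scores else 0.0
    # Assumed gating rule: admit the process signal only when the final
    # answer is correct, so rubric shaping cannot overturn the outcome.
    gate = 1.0 if outcome_reward > 0 else 0.0
    return outcome_reward + lam * gate * rubric_reward

# Example: correct answer (1.0) with rubric scores averaging 0.8.
print(gated_fusion_reward(1.0, [0.8, 1.0, 0.6]))  # -> 1.4
```

A multiplicative gate of this kind keeps the process reward subordinate to correctness, which is one way such a fusion could guide intermediate reasoning without rewarding well-structured but wrong answers.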