Keywords: LLM as a Judge, Reasoning, Large Language Models
TL;DR: JudgeLRM introduces a family of RL-trained LLMs optimized for judgment tasks, demonstrating superior performance over SFT baselines and models like GPT-4 and DeepSeek-R1 by enhancing evaluative reasoning in reasoning-intensive scenarios.
Abstract: Large Language Models (LLMs) are increasingly adopted as evaluators, offering a scalable alternative to human annotation. However, existing supervised fine-tuning (SFT) approaches often fall short in domains that demand complex reasoning. Judgment is inherently reasoning-intensive: beyond surface-level scoring, it requires verifying evidence, identifying errors, and justifying decisions. Through an analysis of evaluation tasks, we find a negative correlation between SFT performance gains and the proportion of reasoning-demanding samples, revealing the limits of SFT in such scenarios. To address this, we introduce JudgeLRM, a family of judgment-oriented LLMs trained using reinforcement learning (RL) with judge-wise, outcome-driven rewards to activate reasoning capabilities. JudgeLRM models consistently outperform SFT-tuned baselines of the same size, as well as other RL and SFT variants, and even surpass state-of-the-art reasoning models: notably, JudgeLRM-3B/4B exceed GPT-4, while JudgeLRM-7B/8B/14B outperform DeepSeek-R1 by over 2\% in F1 score, with particularly strong gains on reasoning-heavy tasks. Our findings underscore the value of RL in unlocking reasoning-aligned LLM judges. The code is available at \url{https://anonymous.4open.science/r/JudgeLRM-D1C4/}.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 1068