Keywords: llm evaluation, reasoning model, slow thinking, judge model, reward model
Abstract: With the release of OpenAI’s o1 model, reasoning models that adopt slow-thinking strategies have become increasingly common. Their outputs often contain complex reasoning, intermediate steps, and self-reflection, making existing evaluation methods and reward models inadequate. In particular, they struggle to judge answer equivalence and to reliably extract final answers from long, complex responses. To address this challenge, we propose xVerify, an efficient answer verifier for evaluating reasoning models. xVerify shows strong equivalence judgment capabilities, enabling accurate comparison between model outputs and reference answers across diverse question types. To train and evaluate xVerify, we construct the VAR dataset, consisting of multi-round annotated QA pairs generated by multiple LLMs across challenging reasoning tasks. Experimental results on both test and generalization sets show that all xVerify variants achieve over 95% F1 score and accuracy. Notably, the smallest model, xVerify-0.5B-I, outperforms all evaluation methods except GPT-4o, while xVerify-3B-Ib surpasses GPT-4o in overall performance. In addition, reinforcement learning experiments using xVerify as the reward model yield an 18.4% improvement for Qwen2.5-7B compared with direct generation, exceeding the gains achieved with Math Verify as the reward. These results demonstrate the effectiveness and generalizability of xVerify. All xVerify resources are available on [GitHub](https://anonymous.4open.science/r/xVerify-5702).
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation methodologies, automatic evaluation of datasets, evaluation, metrics, NLP datasets
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: English, Chinese
Submission Number: 467