Keywords: verification, test-time scaling, discriminative verification, best-of-n sampling, budget-aware
TL;DR: Hybrid discriminative verification combines verifier scores with self-consistency consensus, matching or surpassing generative verifiers under practical compute budgets for efficient test-time scaling of LLM reasoning.
Abstract: Test-time scaling is a powerful strategy for boosting the performance of large language models on complex reasoning tasks. While state-of-the-art approaches often employ generative verifiers to select the best solution from a pool of candidates, this method incurs prohibitive computational costs, limiting its practicality. In this work, we pivot the focus to a more budget-aware paradigm: discriminative verification. We conduct a thorough empirical analysis and demonstrate that while discriminative verifiers may underperform in isolation, combining them with self-consistency in a hybrid approach creates a powerful and efficient selection mechanism. These hybrid methods consistently outperform self-consistency with negligible computational overhead (e.g., less than 2% on AIME2025). More importantly, under a fixed compute budget, our approach surpasses state-of-the-art generative verification by a significant margin, achieving up to 6.1% higher accuracy on AIME2025. Our findings establish that for practical, real-world applications, budget-aware scaling with discriminative verifiers is not only a "free" upgrade over self-consistency, but also a more effective and efficient alternative to costly generative techniques. Code is available at https://anonymous.4open.science/r/Verification-ICLR2026.
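The abstract does not spell out the hybrid selection rule, but one common instantiation of combining discriminative verifier scores with self-consistency is verifier-weighted voting over the final answers of the sampled solutions. The sketch below is a minimal illustration under that assumption; the names `hybrid_select`, `candidates`, and `scores` are hypothetical, and the per-candidate scalar scores in [0, 1] are assumed to come from a discriminative verifier.

```python
from collections import defaultdict

def hybrid_select(candidates, scores):
    """Pick a final answer from best-of-n samples by combining
    self-consistency (voting over final answers) with discriminative
    verifier scores (one scalar per sampled solution).

    candidates: list of final answers, one per sampled solution
    scores:     list of verifier scores in [0, 1], same length

    Each candidate's vote is weighted by its verifier score, so the
    result reflects both answer frequency and verifier confidence.
    """
    totals = defaultdict(float)
    for answer, score in zip(candidates, scores):
        totals[answer] += score
    # Answer with the highest total verifier-weighted vote mass wins.
    return max(totals, key=totals.get)

# Example: 5 sampled solutions yielding 2 distinct final answers.
answers = ["42", "42", "17", "42", "17"]
scores = [0.2, 0.3, 0.9, 0.1, 0.8]
print(hybrid_select(answers, scores))  # "17": verifier outweighs raw majority
```

In this toy example, plain self-consistency would pick "42" (3 votes vs. 2), while the verifier-weighted vote picks "17" (total weight 1.7 vs. 0.6), showing how the two signals can trade off. The overhead relative to self-consistency is only the verifier's scoring pass, consistent with the small cost the abstract reports.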
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15169