Strategic Scaling of Test-Time Compute: A Bandit Learning Approach

ICLR 2026 Conference Submission 22363 Authors

20 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: Test-time scaling, bandit learning, large language models, pure exploration
Abstract: Scaling test-time compute has emerged as an effective strategy for improving the performance of large language models. However, existing methods typically allocate compute uniformly across all queries, overlooking variation in query difficulty. To address this inefficiency, we formulate test-time compute allocation as a novel bandit learning problem and propose adaptive algorithms that estimate query difficulty on the fly and allocate compute accordingly. Compared to uniform allocation, our algorithms allocate more compute to challenging queries while maintaining accuracy on easier ones. Among challenging queries, our algorithms further learn to prioritize solvable instances, effectively reducing excessive computing on unsolvable queries. We theoretically prove that our algorithms achieve better compute efficiency than uniform allocation and empirically validate their effectiveness on math and code benchmarks. Specifically, our algorithms achieve up to an 11.10\% performance improvement (15.04\% relative) on the MATH-500 dataset, up to 10.82\% (14.44\% relative) on the AIME25 dataset, and up to an 11.23\% performance improvement (15.29\% relative) on the LiveCodeBench dataset.
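The abstract does not specify the paper's algorithms, but the bandit framing it describes (queries as arms, one solution attempt per pull, adaptive allocation toward uncertain or promising queries) can be sketched generically. The following is a minimal, hypothetical UCB-style allocator written for illustration only; the function name `allocate_compute`, the simulated `solve_prob` oracle, and all parameters are assumptions, not the authors' method.

```python
import math
import random

def allocate_compute(queries, budget, solve_prob, seed=0):
    """Illustrative bandit-style compute allocator (not the paper's algorithm).

    Each query is an arm; pulling an arm spends one unit of compute
    (one sampled solution attempt, simulated here by `solve_prob`).
    A UCB-style index favors queries that are uncertain (few pulls)
    or promising (some successes), so easy queries stop consuming
    budget once solved and hopeless ones are gradually deprioritized.
    """
    rng = random.Random(seed)
    pulls = {q: 0 for q in queries}   # compute spent per query
    wins = {q: 0 for q in queries}    # successful attempts per query
    solved = set()
    for t in range(1, budget + 1):
        candidates = [q for q in queries if q not in solved]
        if not candidates:
            break  # every query solved; stop spending compute

        def index(q):
            if pulls[q] == 0:
                return float("inf")   # try every query at least once
            mean = wins[q] / pulls[q]
            bonus = math.sqrt(2 * math.log(t) / pulls[q])
            return mean + bonus       # empirical rate + exploration bonus

        q = max(candidates, key=index)
        pulls[q] += 1
        if rng.random() < solve_prob[q]:  # one solution attempt at query q
            wins[q] += 1
            solved.add(q)
    return pulls, solved
```

Under this toy model, a query solved on its first attempt receives no further compute, while an unsolvable query (success probability zero) keeps absorbing pulls only at the rate its shrinking exploration bonus allows, which is the qualitative behavior the abstract attributes to the proposed algorithms.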
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22363