Reasoning without Training: Your Base Model is Smarter Than You Think

ICLR 2026 Conference Submission 15818 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLMs, reasoning, MCMC, sampling, inference-time compute
TL;DR: We find a training-free sampling algorithm that achieves reasoning boosts on base models comparable to those obtained by RL techniques.
Abstract: Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by post-training large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to the question of whether RL elicits truly novel behaviors or merely amplifies capabilities already present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilities can be elicited from base models at inference time, *without any additional training*. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm leveraging the base models' own likelihoods. Across different base models, we show that our algorithm offers boosts in reasoning that nearly match, and in some cases exceed, those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Crucially, our method does not require training, curated datasets, or a verifier, suggesting a general applicability beyond easily verifiable domains.
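The abstract only gestures at the algorithm, so the sketch below illustrates one generic way to sample from a sharpened distribution pi(x) ∝ p(x)^alpha using only a model's own likelihoods: an independence Metropolis-Hastings chain whose proposals are fresh samples from the model itself. This is a toy illustration of the general MCMC idea, not the authors' method; the toy model, `ALPHA`, sequence length, and all function names are assumptions made for the example.

```python
# Minimal sketch (NOT the paper's algorithm): independence Metropolis-Hastings
# targeting the sharpened distribution pi(x) ∝ p(x)^ALPHA, where p is a toy
# autoregressive model standing in for a base LLM's likelihood.
import math
import random

random.seed(0)

VOCAB = [0, 1, 2]   # toy token vocabulary
SEQ_LEN = 8         # fixed sequence length for the toy example
ALPHA = 4.0         # sharpening exponent; ALPHA > 1 concentrates mass on likely sequences


def next_token_probs(prefix):
    """Toy 'base model': next-token distribution given a prefix (favours repetition)."""
    last = prefix[-1] if prefix else 0
    probs = [0.2, 0.2, 0.2]
    probs[last] += 0.4
    return probs


def sample_sequence():
    """Draw a full sequence x ~ p from the toy model."""
    seq = []
    for _ in range(SEQ_LEN):
        seq.append(random.choices(VOCAB, weights=next_token_probs(seq))[0])
    return seq


def log_likelihood(seq):
    """log p(x) under the toy model: sum of conditional log-probabilities."""
    return sum(math.log(next_token_probs(seq[:i])[tok]) for i, tok in enumerate(seq))


def sharpened_mh(n_steps=2000):
    """Independence MH chain targeting pi(x) ∝ p(x)^ALPHA.

    With proposals drawn from p itself, the acceptance ratio reduces to
    (p(x') / p(x)) ** (ALPHA - 1), computed here in log space.
    """
    current = sample_sequence()
    current_ll = log_likelihood(current)
    samples = []
    for _ in range(n_steps):
        proposal = sample_sequence()
        proposal_ll = log_likelihood(proposal)
        log_accept = (ALPHA - 1.0) * (proposal_ll - current_ll)
        if math.log(random.random()) < log_accept:
            current, current_ll = proposal, proposal_ll
        samples.append((current, current_ll))
    return samples


if __name__ == "__main__":
    samples = sharpened_mh()
    sharpened_avg = sum(ll for _, ll in samples[-500:]) / 500
    plain_avg = sum(log_likelihood(sample_sequence()) for _ in range(500)) / 500
    print("avg log p(x), sharpened chain:", round(sharpened_avg, 3))
    print("avg log p(x), plain sampling: ", round(plain_avg, 3))
```

Run as a script, the sharpened chain's samples show a noticeably higher average log-likelihood than plain ancestral samples, which is the qualitative effect of sampling from p^alpha; any practical variant on a real LLM would need a more local proposal (e.g. resampling suffixes) rather than full-sequence independence proposals.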
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 15818