Keywords: Language Models, Variational Reasoning, Reinforcement Learning
TL;DR: We propose a variational reasoning framework that treats thinking traces as latent variables optimized via variational inference, yielding a principled and stable training objective that improves LLM reasoning across diverse benchmarks.
Abstract: We introduce a **variational reasoning** framework for language models that treats thinking traces as latent variables and optimizes them through variational inference. Starting from the evidence lower bound (ELBO), we extend it to a multi-trace objective for tighter bounds and propose a forward-KL formulation that stabilizes the training of the variational posterior. We further show that rejection sampling finetuning and binary-reward RL, including GRPO, can be interpreted as local forward-KL objectives, where *an implicit weighting by model accuracy* naturally arises from the derivation and reveals a previously unnoticed bias toward easier questions. We empirically validate our method on the Qwen 2.5 and Qwen 3 model families across a wide range of reasoning tasks. Overall, our work provides a principled probabilistic perspective that unifies variational inference with RL-style methods and yields stable objectives for improving the reasoning ability of language models.
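A minimal notational sketch of the objectives the abstract names, not taken from the paper itself: the symbols $x$ (question), $z$ (thinking trace, treated as latent), $y$ (answer), $p_\theta$ (language model), and $q_\phi$ (variational posterior) are assumed notation for illustration.

$$\log p_\theta(y \mid x) \;=\; \log \mathbb{E}_{p_\theta(z \mid x)}\big[ p_\theta(y \mid x, z) \big] \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[ \log \frac{p_\theta(y \mid x, z)\, p_\theta(z \mid x)}{q_\phi(z \mid x, y)} \right] \;=:\; \mathrm{ELBO}(\theta, \phi)$$

A multi-trace, importance-weighted variant with $K$ sampled traces gives a bound that is tighter as $K$ grows:

$$\mathcal{L}_K \;=\; \mathbb{E}_{z_{1:K} \sim q_\phi(\cdot \mid x, y)}\!\left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(y \mid x, z_k)\, p_\theta(z_k \mid x)}{q_\phi(z_k \mid x, y)} \right]$$

A forward-KL formulation for fitting the posterior would instead target $\min_\phi \mathrm{KL}\big( p_\theta(z \mid x, y) \,\|\, q_\phi(z \mid x, y) \big)$; in general, the forward KL is mass-covering rather than mode-seeking, which is one plausible reading of the stability claim in the abstract.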
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8347