Keywords: Large Language Model, Chain-of-thought
TL;DR: We present RAVR, an end‑to‑end framework that leverages answer‑conditioned reasoning as a variational proxy for question‑only reasoning, enabling LLMs to recover high‑quality reasoning paths and learn more effectively on difficult problems.
Abstract: Reinforcement learning (RL) can refine the reasoning abilities of large language models (LLMs), but it critically depends on one prerequisite: the LLM must already be able to generate high‑utility reasoning paths with reasonable probability. For tasks beyond the LLM’s current competence, such reasoning paths can be hard to sample, and learning risks reinforcing familiar but suboptimal reasoning.
We are motivated by the insight from cognitive science that *Why is this the answer?* is often an easier question than *What is the answer?*, as it avoids the heavy cognitive load of open-ended exploration, opting instead for explanatory reconstruction—systematically retracing the reasoning that links a question to its answer.
We show that LLMs can similarly leverage answers to derive high-quality reasoning paths.
We formalize this phenomenon and prove that conditioning on the answer increases the expected utility of sampled reasoning paths, thereby transforming intractable problems into learnable ones. Building on this insight, we introduce RAVR (Reference-Answer-guided Variational Reasoning), an end-to-end framework that uses answer-conditioned reasoning as a variational surrogate for question-only reasoning. Experiments in both general and math domains demonstrate consistent improvements over strong baselines. We further analyze reasoning behavior and find that RAVR reduces hesitation, strengthens conclusion consolidation, and promotes problem-specific strategies in reasoning.
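One way to read the variational-surrogate claim (an illustrative sketch in our own notation, not the paper's stated objective): writing $x$ for the question, $a$ for the reference answer, and $z$ for a sampled reasoning path, a standard evidence lower bound would take the answer-conditioned policy as the variational posterior for the question-only policy being trained,

$$\log p_\theta(a \mid x) \;\ge\; \mathbb{E}_{z \sim \pi_\phi(z \mid x, a)}\big[\log p_\theta(a \mid x, z)\big] \;-\; \mathrm{KL}\big(\pi_\phi(z \mid x, a) \,\|\, \pi_\theta(z \mid x)\big),$$

where $\pi_\phi(z \mid x, a)$ denotes answer-conditioned reasoning and $\pi_\theta(z \mid x)$ question-only reasoning; the symbols $\pi_\phi$, $\pi_\theta$, and $z$ are our own labels for exposition.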
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 23585