LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning

Published: 26 Jan 2026, Last Modified: 11 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Large Language Models, Reasoning, Diffusion Models, Latent Reasoning
TL;DR: LaDiR is a novel latent reasoning framework that encodes latent “thought tokens” with a VAE and predicts them via latent diffusion models, enabling adaptive test-time compute, parallel diverse generation, and better interpretability.
Abstract: Large Language Models (LLMs) demonstrate their reasoning ability through chain-of-thought (CoT) generation. However, autoregressive decoding limits an LLM's ability to revisit and refine earlier tokens holistically, and can make exploration of diverse solutions inefficient. In this paper, we propose \textit{LaDiR} (\textbf{La}tent \textbf{Di}ffusion \textbf{R}easoner), a novel reasoning framework that unifies the expressiveness of continuous latent representations with the iterative refinement capabilities of latent diffusion models, while operating effectively without large-scale pretraining. We first construct a structured latent reasoning space using a Variational Autoencoder (VAE) that encodes text reasoning steps into blocks of thought tokens, preserving semantic information and interpretability while offering compact yet expressive representations. We then train a latent diffusion model to denoise blocks of latent \textit{thought tokens} under a blockwise bidirectional attention mask, enabling longer-horizon, iterative refinement with adaptive test-time compute. This design supports efficient parallel generation of diverse reasoning trajectories, allowing the model to plan and revise the reasoning process holistically. We evaluate on a suite of mathematical reasoning and planning benchmarks. Empirical results show that LaDiR consistently improves accuracy, diversity, and interpretability over existing autoregressive, diffusion-based, and latent reasoning methods.
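To make the abstract's core idea concrete, here is a minimal toy sketch of blockwise iterative refinement: reasoning is split into fixed-size blocks of latent "thought tokens", every block is refined in parallel at each step, and the number of refinement steps acts as adaptive test-time compute. All names, the `BLOCK_SIZE` constant, and the toy "denoiser" below are illustrative assumptions, not the paper's implementation (which uses a learned VAE encoder and a trained latent diffusion model).

```python
import random

BLOCK_SIZE = 4  # latent thought tokens per reasoning step (assumed size)

def encode_steps(steps, dim=BLOCK_SIZE):
    """Stand-in for the VAE encoder: map each text step to a noisy latent block."""
    rng = random.Random(0)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in steps]

def toy_denoiser(block, target):
    """Stand-in for the learned denoiser: nudge each latent halfway to a target."""
    return [x + 0.5 * (t - x) for x, t in zip(block, target)]

def refine(blocks, targets, num_steps):
    """Refine all blocks in parallel; more steps = more test-time compute."""
    for _ in range(num_steps):
        blocks = [toy_denoiser(b, t) for b, t in zip(blocks, targets)]
    return blocks

steps = ["step 1: set up equation", "step 2: solve for x"]
targets = [[1.0] * BLOCK_SIZE for _ in steps]
latents = refine(encode_steps(steps), targets, num_steps=8)
# After 8 halving steps the residual shrinks by 2^-8, so every latent is
# close to its target regardless of where the initial noise started.
print(max(abs(x - 1.0) for block in latents for x in block) < 0.1)  # → True
```

Note how, unlike left-to-right autoregressive decoding, every block is updated at every step, so later "thoughts" can influence the revision of earlier ones.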
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 23007