Latent & Implicit Thinking – Going Beyond CoT Reasoning

Published: 24 Dec 2025, Last Modified: 30 Dec 2025 · ICLR 2026 Workshop Proposals · CC BY 4.0
Keywords: latent reasoning, implicit reasoning, neural networks
TL;DR: Our workshop unites researchers exploring implicit, latent, and non-autoregressive reasoning in neural networks, aiming to move beyond explicit chain-of-thought toward more efficient and expressive forms of reasoning within hidden representations.
Abstract: Recent advances in AI have revealed that explicit Chain-of-Thought (CoT) reasoning—where models verbalize intermediate reasoning steps—while powerful, is neither the only nor the most efficient form of reasoning. The emerging paradigm of latent and implicit thinking explores how models can reason within their hidden representations or parameter space, using continuous latent states, recurrent or looped architectures, and non-autoregressive formulations such as diffusion or search-based models. This workshop, Latent & Implicit Thinking: Going Beyond CoT Reasoning (LIT), aims to unify these growing research efforts across different areas. It will feature discussions on latent-space reasoning tokens, looped and recurrent architectures, latent generative paradigms, and theoretical insights into the nature of latent reasoning depth and efficiency. By bringing together experts from academia and industry, LIT will provide a forum for deep technical exchange and cross-disciplinary collaboration, fostering a new shared framework for understanding and enhancing reasoning in the latent space of neural networks.
Submission Number: 51