Latent Chain-of-Thought? Decoding the Depth-Recurrent Transformer

Published: 24 Jul 2025, Last Modified: 04 Oct 2025
Venue: XLLM-Reason-Plan
License: CC BY 4.0
Keywords: Recurrent Transformer, Mechanistic Interpretability, Chain-of-Thought, Latent Reasoning
TL;DR: We probe the depth-recurrent transformer Huginn-3.5B with decoding lenses (Logit Lens and Coda Lens) to test for latent CoT reasoning on math tasks, but find no clear evidence: hidden-state semantics are inconsistent across layers, and deeper recurrence yields only marginal gains.
Abstract: Chain-of-thought (CoT) reasoning has enabled transformer-based language models to excel at complex mathematics and multi-step planning. However, in standard decoder-only architectures, these reasoning steps are externalized in natural language, improving interpretability at the cost of efficiency. To capture reasoning that is not easily represented in words, many works have explored recurrent architectures that aim to internalize reasoning in latent space, potentially supporting latent CoT. In this paper, we investigate whether such reasoning structures emerge in Huginn-3.5B, a depth-recurrent transformer that reuses layers at inference time without increasing parameter count. We examine the model's internal behavior on arithmetic tasks using a suite of probing techniques, including the Logit Lens and the Coda Lens. Tracking the rank trajectories of final and intermediate result tokens, we find limited evidence of interpretable latent CoT. Furthermore, we uncover significant probing inconsistencies across recurrent blocks: the interpretability of hidden states depends heavily on both the layer index and the decoding method. Finally, we show empirically that increasing recurrence depth yields only marginal gains and falls well short of models that explicitly externalize reasoning steps.
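
For concreteness, the snippet below is a minimal sketch of Logit-Lens-style probing in the spirit described above: intermediate hidden states are projected through the model's final norm and unembedding, and the rank of the target answer token is tracked across layers. The model name (`gpt2`), the `transformer.ln_f` attribute, and the arithmetic prompt are illustrative assumptions rather than the paper's actual setup; applying this to Huginn-3.5B's depth-recurrent blocks would require model-specific hooks to collect hidden states per recurrence step.

```python
# Sketch of a Logit Lens rank-trajectory probe (illustrative assumptions:
# model name, attribute names, and prompt are placeholders, not the paper's setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; a depth-recurrent checkpoint would go here

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

prompt = "23 + 58 = "
answer = " 81"
inputs = tok(prompt, return_tensors="pt")
# Take the first token of the answer string as the probe target.
answer_id = tok(answer, add_special_tokens=False).input_ids[0]

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

final_norm = model.transformer.ln_f       # GPT-2 naming; model-specific in general
unembed = model.get_output_embeddings()   # unembedding / lm_head

# out.hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
ranks = []
for h in out.hidden_states:
    # Decode the last position of each intermediate state through norm + unembedding.
    logits = unembed(final_norm(h[:, -1, :]))
    # Rank of the answer token = number of tokens with strictly higher logits (0 = top-1).
    rank = (logits[0] > logits[0, answer_id]).sum().item()
    ranks.append(rank)

print("Logit-Lens rank of answer token per layer:", ranks)
```

A falling rank trajectory toward later layers would indicate that the intermediate states progressively encode the (decodable) answer; in the paper's setting, the analogous trajectories are tracked across Huginn-3.5B's recurrence steps rather than distinct layers.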
Paper Published: No
Paper Category: Short Paper
Supplementary Material: zip
Demography: Prefer not to say
Academic: Masters Student
Submission Number: 12