THINK DEEP, SPEAK ONCE: RELIT, A RECURSIVE LATENT IMPLICIT TRANSFORMER FRAMEWORK

Published: 02 Mar 2026, Last Modified: 18 Mar 2026, LIT Workshop @ ICLR 2026, CC BY 4.0
Track: long paper (up to 10 pages)
Keywords: Latent Reasoning, Recursive Transformers, Implicit Chain-of-Thought, Logical Reasoning
TL;DR: We introduce ReLIT, a framework that uses LLMs as a backbone to perform "deep thinking" by refining hidden reasoning vectors within recursive latent loops.
Abstract: \begin{abstract} \footnotetext[2]{Equal contribution.} Chain-of-Thought (CoT) prompting has become the dominant paradigm for eliciting reasoning in Large Language Models (LLMs), yet it incurs substantial computational overhead by forcing models to externalize intermediate reasoning steps as discrete tokens. Recent latent reasoning approaches instead internalize this process within continuous hidden states. Among the most recent of these, Tiny Recursive Models (TRMs) excel at symbolic reasoning but struggle to preserve semantic coherence in natural language settings. To bridge this gap, we introduce \textbf{ReLIT} (\textbf{Re}cursive \textbf{L}atent \textbf{I}mplicit \textbf{T}ransformer), a hybrid framework that grounds deep recursive reasoning in the rich semantic representations of a foundation model. ReLIT augments a frozen LLM backbone (TinyLlama-1.1B) with a lightweight, trainable recursive block that iteratively refines a latent thought vector ($z$) before committing to a final output, structurally decoupling linguistic intuition from algorithmic processing and enabling ``deep thinking'' via gradient-isolated recurrent loops without the latency of explicit token generation. Empirically, ReLIT achieves high parameter efficiency on the GLoRE logical reasoning benchmark, matching or outperforming significantly larger models on challenging tasks such as ProofWriter and RuleTaker despite minimal supervision. These results demonstrate that reasoning capability can be scaled efficiently through recurrent depth rather than parameter width, offering a principled framework for semantically grounded implicit reasoning. \end{abstract}
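The abstract describes a frozen backbone whose hidden states feed a small trainable block that refines a latent thought vector $z$ over gradient-isolated recurrent loops. As a minimal illustrative sketch (not the authors' implementation), the module below assumes a residual refinement update and interprets "gradient isolation" as detaching $z$ between iterations so only the final step is backpropagated; all names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class RecursiveLatentBlock(nn.Module):
    """Hypothetical sketch of a ReLIT-style recursive block.

    A frozen backbone supplies hidden states h; this small block
    iteratively refines a latent thought vector z. Between loop
    iterations z is detached, so gradients flow only through the
    final refinement step (one plausible reading of
    "gradient-isolated recurrent loops").
    """

    def __init__(self, d_model: int, n_loops: int = 4):
        super().__init__()
        self.n_loops = n_loops
        self.refine = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = torch.zeros_like(h)              # initial latent thought
        for step in range(self.n_loops):
            if step < self.n_loops - 1:
                z = z.detach()               # isolate gradients between loops
            z = z + self.refine(torch.cat([h, z], dim=-1))
        return z                             # refined latent for the LM head

# Toy usage with random stand-in backbone states.
block = RecursiveLatentBlock(d_model=64)
h = torch.randn(2, 10, 64)                   # (batch, seq_len, d_model)
z = block(h)
print(z.shape)  # torch.Size([2, 10, 64])
```

Detaching between iterations keeps memory cost constant in the number of loops, which is consistent with the paper's claim of scaling reasoning through recurrent depth rather than parameter width.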
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 47