Lightweight Latent Reasoning for Narrative Tasks

Published: 02 Mar 2026, Last Modified: 18 Mar 2026, LIT Workshop @ ICLR 2026, CC BY 4.0
Track: long paper (up to 10 pages)
Keywords: latent reasoning, narrative, story generation, reinforcement learning
TL;DR: We propose LiteReason, a method to interleave latent reasoning and discrete sampling designed for reinforcement learning on low-resource narrative tasks.
Abstract: Large language models (LLMs) tackle complex tasks by generating long chains of thought or ``reasoning traces'' that act as latent variables in the generation of an output given a query. A model's ability to generate such traces can be optimized with reinforcement learning (RL) to improve their utility in predicting an answer. This optimization comes at a high computational cost, especially for narrative-related tasks that involve retrieving and processing many tokens. To address this, we propose LiteReason, a latent reasoning method that can be interleaved with standard token sampling and easily combined with RL techniques. LiteReason employs a lightweight Reasoning Projector module, trained to produce continuous latent tokens that help the model `skip' reasoning steps. During RL, the policy model decides when to activate the projector, switching between latent and discrete reasoning as needed. Experimental results on plot hole detection and book chapter generation show that our method outperforms latent reasoning baselines and comes close to matching non-latent RL training, while reducing final reasoning length by 77--92\%. Overall, LiteReason guides RL training to a more efficient part of the performance-computation tradeoff curve.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 68
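The interleaving described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the projector is reduced to a single linear map, and the policy's switching decision is replaced by a fixed deterministic rule; all names (`reasoning_projector`, `policy_wants_latent`, `discrete_step`) are assumptions introduced for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # toy hidden-state size

# Stand-in for the lightweight Reasoning Projector: a single linear map
# that turns the current hidden state into a continuous latent token
# (the real module's architecture is not specified here).
W_proj = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1

def reasoning_projector(h):
    # Produce a continuous latent token from the hidden state.
    return np.tanh(W_proj @ h)

def policy_wants_latent(step):
    # Stand-in for the learned policy decision of when to activate the
    # projector; here we simply alternate between modes.
    return step % 2 == 0

def discrete_step(h):
    # Stand-in for ordinary discrete token sampling + embedding lookup.
    return np.tanh(h + rng.normal(size=HIDDEN) * 0.01)

h = rng.normal(size=HIDDEN)
trace = []
for step in range(6):
    if policy_wants_latent(step):
        h = reasoning_projector(h)   # latent 'skip' step: no token emitted
        trace.append("latent")
    else:
        h = discrete_step(h)         # normal discrete reasoning token
        trace.append("discrete")

print(trace)  # -> ['latent', 'discrete', 'latent', 'discrete', 'latent', 'discrete']
```

The key design point the sketch mirrors is that latent steps feed continuous vectors back into the model without sampling a token, so the RL-trained policy can shorten the visible reasoning trace by routing stretches of it through the projector.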