ThinkLess: A Training-Free Inference-Efficient Method for Reducing Reasoning Redundancy

ACL ARR 2025 May Submission5580 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: While Chain-of-Thought (CoT) prompting improves reasoning in large language models (LLMs), the excessive length of reasoning tokens increases latency and KV cache memory usage, and may even truncate final answers under context limits. We propose \textbf{ThinkLess}, an inference-efficient framework that terminates reasoning generation early and maintains output quality without modifying the model. Attention analysis reveals that answer tokens attend minimally to earlier reasoning steps and primarily to the reasoning terminator token, due to information migration under causal masking. Building on this insight, ThinkLess inserts the terminator token at earlier positions to skip redundant reasoning while preserving the underlying knowledge transfer. To prevent the format disruption caused by early termination, ThinkLess employs a lightweight post-regulation mechanism that relies on the model's natural instruction-following ability to produce well-structured answers. Without fine-tuning or auxiliary data, ThinkLess achieves accuracy comparable to full-length CoT decoding while greatly reducing decoding time and memory consumption.
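The mechanism the abstract describes can be illustrated with a minimal sketch: close the reasoning block immediately after it opens, then append an instruction that restores the expected answer format. This is not the authors' implementation; the model name, the `<think>`/`</think>` terminator strings, and the post-regulation instruction below are assumptions for illustration only.

```python
# Minimal sketch of the ThinkLess idea described in the abstract (assumed details,
# not the authors' code): skip reasoning tokens by inserting the terminator early,
# then post-regulate the output format via a plain instruction.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed reasoning model with <think> tags
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

question = "If a train travels 120 km in 1.5 hours, what is its average speed?"

prompt = (
    f"{question}\n"
    "<think>\n</think>\n"  # early terminator: the reasoning block is closed before any CoT is generated
    "Answer concisely and end with 'The answer is <value>'.\n"  # assumed post-regulation instruction
)

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated answer tokens, not the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Because no reasoning tokens are generated, decoding time and KV cache usage scale with the (short) answer alone, which is the efficiency gain the abstract claims.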
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: CoT Reasoning, Reasoning Compression, Model Compression
Contribution Types: Approaches to low-resource settings, Approaches to low-compute settings (efficiency)
Languages Studied: English
Keywords: Chain-of-Thought, CoT Compression, Efficient Reasoning
Submission Number: 5580