SyncThink: A Training-Free Strategy to Align Inference Termination with Reasoning Saturation

ACL ARR 2026 January Submission9953 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Models, Chain-of-Thought Reasoning, Attention Analysis, Inference Efficiency, Dynamic Termination, Training-free Decoding, Reasoning Saturation, Cognitive Lag
Abstract: Chain-of-Thought (CoT) prompting improves reasoning but often produces long, redundant traces that substantially increase inference cost. We present `SyncThink`, a training-free, plug-and-play decoding method that reduces CoT overhead without modifying model weights. We find that answer tokens attend weakly to early reasoning and focus on `</think>`, indicating an information bottleneck. Building on this observation, SyncThink monitors the model's own reasoning-transition signal and terminates reasoning once it saturates. Experiments on GSM8K, MMLU, GPQA, and BBH across three DeepSeek-R1 distilled models show that SyncThink achieves 62.00\% average Top@1 accuracy with 656 generated tokens and 28.68s latency, compared to 61.22\%, 2141 tokens, and 92.01s for full CoT decoding. On long-horizon tasks such as GPQA, SyncThink yields up to +8.1 points of absolute accuracy by preventing over-thinking.
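The termination mechanism the abstract describes can be sketched as a modified greedy-decoding loop: watch the model's own probability of emitting `</think>` at each step and inject it early once that probability crosses a threshold. This is a minimal illustrative sketch, not the paper's exact implementation; the function names, the dict-based logit representation, and the `threshold` value are all assumptions for demonstration.

```python
import math

# Hypothetical reasoning-terminator token (matches the abstract's `</think>`).
END_THINK = "</think>"

def softmax(logits):
    """Numerically stable softmax over a dict mapping token -> logit."""
    m = max(logits.values())
    exps = {tok: math.exp(x - m) for tok, x in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def sync_think_decode(logit_stream, threshold=0.3, max_steps=64):
    """Greedily decode reasoning tokens, but force `</think>` as soon as
    the model's probability of emitting it crosses `threshold`, i.e.
    terminate reasoning when the transition signal saturates.
    `threshold` is an illustrative hyperparameter, not the paper's value."""
    trace = []
    for step, logits in enumerate(logit_stream):
        if step >= max_steps:
            break
        probs = softmax(logits)
        if probs.get(END_THINK, 0.0) >= threshold:
            trace.append(END_THINK)  # cut the reasoning phase short here
            break
        trace.append(max(probs, key=probs.get))  # ordinary greedy pick
    return trace

# Simulated per-step logits in which the `</think>` signal rises over time:
stream = [
    {"step1": 2.0, END_THINK: -2.0},
    {"step2": 2.0, END_THINK: 0.0},
    {"step3": 1.0, END_THINK: 2.0},
]
print(sync_think_decode(stream))  # reasoning ends at the third step
```

In a real decoder the `logit_stream` would come from the language model's forward passes, and the check could be implemented as a custom stopping criterion; the training-free property follows from the fact that only the decoding loop, not the weights, is changed.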
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: Efficient/Low-Resource Methods for NLP, chain-of-thought, inference methods, efficient models, analysis
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 9953