Keywords: Thinking Traps, Prefix Dominance, Trap Index, Trap-Aware Adaptive Restart, Long Chain-of-Thought
Abstract: Scaling test-time compute via Long Chain-of-Thought (Long-CoT) significantly enhances reasoning capabilities, yet extended generation does not guarantee correctness: after an early wrong commitment, models may keep elaborating a self-consistent but incorrect prefix.
Through fine-grained trajectory analysis, we identify Thinking Traps: prefix-dominant deadlocks in which later reflection, alternative attempts, or verification fails to revise the root error. On a curated subset of DAPO-MATH, 89\% of failures exhibit such traps. To address this, we introduce TAAR (Trap-Aware Adaptive Restart), a test-time control framework that trains a diagnostic policy to predict two signals from partial trajectories: a trap index indicating where to truncate, and an escape probability indicating whether and how strongly to intervene.
At inference time, TAAR truncates the trajectory before the predicted trap segment and adaptively restarts decoding; for severely trapped cases, it applies stronger perturbations, including higher-temperature resampling and an optional structured reboot suffix.
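The control logic described above can be sketched as follows. This is a minimal illustration based only on the abstract; the function name, thresholds, and temperature values are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of TAAR's inference-time decision rule.
# All names and numeric defaults (escape_threshold, temperatures)
# are illustrative assumptions, not taken from the paper.

def taar_restart(trajectory, trap_index, escape_prob,
                 escape_threshold=0.5, base_temp=0.7, boost_temp=1.2):
    """Truncate before the predicted trap segment and pick restart settings.

    trajectory  : list of generated segments (e.g., reasoning steps)
    trap_index  : predicted position where the trap segment begins
    escape_prob : predicted probability that a mild restart escapes the trap
    Returns (truncated_prefix, sampling_temperature, use_reboot_suffix).
    """
    # Truncate the trajectory before the predicted trap segment.
    prefix = trajectory[:trap_index]
    if escape_prob >= escape_threshold:
        # Mildly trapped: resample from the prefix at the base temperature.
        return prefix, base_temp, False
    # Severely trapped: apply a stronger perturbation via
    # higher-temperature resampling, optionally with a structured
    # "reboot" suffix appended to the prefix.
    return prefix, boost_temp, True
```

Decoding would then resume from the returned prefix at the chosen temperature, appending the structured reboot suffix when the flag is set.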
Experiments on challenging mathematical and scientific reasoning benchmarks (AIME24, AIME25, GPQA-Diamond, HMMT25, BRUMO25) show that TAAR improves reasoning performance without fine-tuning base model parameters.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Discourse, Pragmatics, and Reasoning, Language Modeling, Resources and Evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis, Position papers
Languages Studied: English, Chinese, Korean, Russian, Arabic, and French
Submission Number: 10023