Thinking About Thinking: Metacognitive Influence Tracing for Reliable LLM Reasoning

ICLR 2026 Conference Submission 14420 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language model, reasoning, self-consistency, chain-of-thought (CoT)
Abstract: Giving large language models (LLMs) *“time to think”* has emerged as a powerful strategy for enhancing reasoning. Prompting methods such as Chain-of-Thought (CoT) and reasoning-focused models like DeepSeek-R1 exemplify this paradigm. However, these approaches remain limited: they treat all reasoning steps as equally important, wasting computation and leaving the process vulnerable when fragile steps propagate errors. Inspired by findings in cognitive science on *critical periods and neural bottlenecks*—where certain experiences exert disproportionate influence—we introduce **Metacognitive Influence Tracing (MIT)**, a *diagnostic* method that identifies **critical junctures** in reasoning. MIT adopts influence principles inspired by cognitive science and models the reasoning process as an influence graph. It then computes propagated influence via a **diffusion process** with the **heat kernel**, revealing latent *cognitive structural patterns* of machine reasoning in both models with explicit reasoning capability (reasoning models) and those without (non-reasoning models). Building on these insights, we propose **Adaptive Critical Sampling (ACS)**, an *intervention* framework that enhances Self-Consistency by selectively resampling at critical junctures rather than entire traces. This targeted resampling improves reasoning reliability while cutting redundant computation. Across six benchmark tasks and five models, ACS delivers an average accuracy gain of **7.48 points** while reducing computational cost by **59.75%**.
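The abstract describes MIT as computing propagated influence over a reasoning graph via heat-kernel diffusion. How the paper builds the influence graph is not specified here, but the diffusion step itself can be illustrated with a minimal sketch: given a toy graph over reasoning steps, compute the heat kernel exp(−t·L) of the graph Laplacian and diffuse raw per-step influence through it. The graph, the parameter `t`, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of heat-kernel diffusion over a reasoning-step graph.
# NOT the paper's implementation: the toy graph, scores, and t=0.5 are
# illustrative assumptions. Pure Python so it runs with no dependencies.

def heat_kernel(adj, t, terms=30):
    """Approximate exp(-t*L) for the Laplacian L = D - A via a Taylor series."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    lap = [[(deg[i] if i == j else 0.0) - adj[i][j] for j in range(n)]
           for i in range(n)]
    m = [[-t * lap[i][j] for j in range(n)] for i in range(n)]  # M = -t*L
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]                              # M^0
    fact = 1.0
    for k in range(1, terms):
        # power <- power @ M, accumulating M^k; result += M^k / k!
        power = [[sum(power[i][x] * m[x][j] for x in range(n))
                  for j in range(n)] for i in range(n)]
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

# Toy trace: four reasoning steps in a chain; step 0 holds the raw influence.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
K = heat_kernel(adj, t=0.5)
scores = [1.0, 0.0, 0.0, 0.0]  # hypothetical per-step influence scores
propagated = [sum(K[i][j] * scores[j] for j in range(4)) for i in range(4)]
# Steps with the highest propagated influence would be candidate
# "critical junctures" for targeted resampling in an ACS-style scheme.
```

One property worth noting: because the Laplacian's rows sum to zero, each row of exp(−t·L) sums to one, so the kernel redistributes influence mass rather than creating it; the choice of `t` controls how far influence spreads along the trace.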
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 14420