Keywords: LLM reasoning, chain-of-thought, theory
TL;DR: We demonstrate fundamental limitations of CoT and propose two theory-grounded alternatives: Mechanistic Reasoning Elicitation (MRE) based on causal intervention theory, and Compositional Abstraction Reasoning (CAR) grounded in category theory.
Abstract: Chain-of-thought (CoT) prompting has emerged as a dominant paradigm for eliciting reasoning capabilities from large language models (LLMs). However, we argue that CoT provides only a superficial and non-generalizable view of neural network reasoning processes. Through theoretical analysis and empirical investigation, we demonstrate fundamental limitations of CoT in capturing the underlying computational mechanisms of LLMs. We propose two theory-grounded alternatives: \textit{Mechanistic Reasoning Elicitation} (MRE) based on causal intervention theory, and \textit{Compositional Abstraction Reasoning} (CAR) grounded in category theory. We provide theoretical guarantees for both approaches and demonstrate their superior generalization properties across diverse reasoning tasks. Our work establishes a new foundation for understanding and improving reasoning in large-scale neural networks.
Submission Number: 226