Keywords: Causal Reasoning; Chain-of-Thought; Large Language Models
TL;DR: We present Causal-CoT, a framework that integrates causal graph construction, augmentation, and verification into CoT.
Abstract: Chain-of-Thought (CoT) prompting enables large language models (LLMs) to expose intermediate reasoning, but the resulting rationales are often unfaithful—skipping premises, confusing relations, or relying on unsupported leaps. We propose Causal-CoT, a framework that integrates causal graph construction, augmentation, and verification into the CoT paradigm. Causal-CoT operates through a three-stage pipeline: (1) DAG-guided CoT constructs an initial causal graph from the problem context; (2) Reflection and Augmentation enriches the graph by adding plausible mediators and contextual variables; and (3) Causal Verification estimates conditional probabilities via prompting and applies do-calculus to compute causal effects. This structured approach transforms linear reasoning into graph-based inference, enabling more faithful and interpretable reasoning. Experiments across seven benchmarks in mathematics, commonsense, and causal reasoning show that Causal-CoT improves both reasoning fidelity and answer accuracy, mitigates “jump-to-answer” shortcuts, achieves more stable performance than standard CoT, and strikes a favorable balance between accuracy and computational cost.
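To make stage (3) concrete, here is a minimal sketch of what do-calculus-based causal verification can look like on a toy graph. It assumes the effect computation reduces to backdoor adjustment on a DAG Z → X → Y with confounder Z → Y; the variable names and probability tables are illustrative, not taken from the paper, and LLM-estimated probabilities are stubbed in as constants.

```python
# Toy "Causal Verification" step: compute P(Y=1 | do(X=x)) by backdoor
# adjustment on a DAG with binary confounder Z (Z -> X, Z -> Y, X -> Y).
# The probability tables stand in for conditional probabilities that the
# framework would elicit from an LLM via prompting.

p_z = {0: 0.6, 1: 0.4}          # P(Z=z), estimated from context
p_y_given_xz = {                # P(Y=1 | X=x, Z=z)
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.60, (1, 1): 0.80,
}

def p_y1_do_x(x: int) -> float:
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

# Average causal effect of X on Y, used to verify (or reject) a reasoning step.
effect = p_y1_do_x(1) - p_y1_do_x(0)
print(p_y1_do_x(1), p_y1_do_x(0), effect)  # 0.68, 0.18, effect = 0.50
```

Intervening with do(X=x) severs the Z → X edge, so summing over Z removes the confounding that a naive conditional P(Y | X) would absorb; this is the distinction that lets the verification stage flag spurious (merely correlational) reasoning steps.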
Supplementary Material: zip
Primary Area: causal reasoning
Submission Number: 10722