SDA-CoT: Structure-driven Dynamic Active Chain-of-Thought

15 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: chain-of-thought; large language model; Bayesian active learning; named entity recognition
Abstract: Chain-of-Thought (CoT) prompting helps large language models (LLMs) reason by making intermediate steps explicit. However, many methods still rely on fixed human-written exemplars and neglect structural cues such as entity relations, which can yield confident but faulty reasoning paths. This paper presents Structure-driven Dynamic Active Chain-of-Thought (SDA-CoT), a framework that combines uncertainty-based exemplar selection with structure-aware reasoning to address these issues. SDA-CoT uses Bayesian Active Learning (BAL) to select exemplars with high uncertainty and high expected value, applies entity and relation extraction to build relational scaffolds, and then produces reasoning paths that remain logically coherent and consistent with context. Across three dataset families (commonsense reasoning, logical reasoning, and math word problems) and two LLMs (LLaMA2-13B and DeepSeek-R1), SDA-CoT surpasses standard CoT methods: with LLaMA2-13B, accuracy increases by 9\% on average, including a 12\% gain on GSM8K; with DeepSeek-R1, the average gain is 8\%. Combining entity–relation analysis with adaptive prompting yields robust and interpretable reasoning chains and provides the first empirical evidence that BAL can significantly improve CoT reasoning in LLMs.
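The abstract page does not include code; the following is a minimal sketch of the uncertainty-based exemplar selection it describes, assuming a BAL-style acquisition that scores candidate questions by the entropy of final answers sampled from the LLM. The helper `sample_answers` and all parameter names are hypothetical illustrations, not the authors' implementation.

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy over the distribution of sampled final answers.

    High entropy indicates high model uncertainty on this candidate
    question, the acquisition signal in uncertainty-based active learning.
    """
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_exemplars(candidates, sample_answers, k=8, n_samples=10):
    """Rank unlabeled questions by answer entropy and keep the top-k.

    `sample_answers(question, n)` is a hypothetical wrapper that draws
    `n` stochastic (temperature > 0) CoT completions from the LLM and
    returns their extracted final answers.
    """
    scored = [
        (answer_entropy(sample_answers(q, n_samples)), q)
        for q in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [q for _, q in scored[:k]]
```

Answer entropy is one common proxy for predictive uncertainty; a full BAL acquisition such as BALD would additionally separate epistemic from aleatoric uncertainty before the selected questions are annotated with relational scaffolds.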
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5833