Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Keywords: Large Language Models, Chain-of-Thought Reasoning, Confidence Calibration, Uncertainty Quantification, Topological Data Analysis, Dirichlet Distribution, Expected Calibration Error, Multi-Step Reasoning, Semantic Embeddings, Geometric Risk Analysis
TL;DR: Chain-of-thought LLMs are often overconfident. EDTR improves calibration by using topological and Dirichlet features of reasoning paths, achieving 41% lower calibration error and the best reliability across four benchmarks, especially on math reasoning tasks.
Abstract: Chain-of-thought (CoT) prompting enables Large Language Models to solve complex problems, but deploying these models safely requires reliable confidence estimates—a capability where existing methods suffer from poor calibration and severe overconfidence on incorrect predictions. We propose Enhanced Dirichlet+Topology Risk (EDTR), a novel decoding strategy that combines topological analysis with Dirichlet-based uncertainty quantification to measure LLM confidence across multiple reasoning paths. EDTR treats each CoT as a vector in high-dimensional space and extracts eight topological risk features capturing the geometric structure of reasoning distributions: tighter, more coherent clusters indicate higher confidence, while dispersed, inconsistent paths signal uncertainty. We evaluate EDTR against three state-of-the-art calibration methods across four diverse reasoning benchmarks spanning olympiad-level mathematics (AIME), grade school math (GSM8K), commonsense reasoning, and stock price prediction. EDTR achieves 41% better calibration than competing methods with an average ECE of 0.287 and the best overall composite score of 0.672, while notably achieving perfect accuracy on AIME and exceptional calibration on GSM8K with an ECE of 0.107—domains where baselines exhibit severe overconfidence. Our work provides a geometric framework for understanding and quantifying uncertainty in multi-step LLM reasoning, enabling more reliable deployment where calibrated confidence estimates are essential.
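The abstract reports calibration via Expected Calibration Error (ECE). As a point of reference for readers, a minimal sketch of the standard equal-width-binning ECE computation is shown below; the function name and the ten-bin default are illustrative choices, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and accuracy
    within equal-width confidence bins (a standard formulation; the
    paper's exact binning scheme may differ)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# An overconfident model: always 95% confident, but only half correct.
print(expected_calibration_error([0.95, 0.95, 0.95, 0.95], [1, 0, 1, 0]))
```

Under this formulation, a lower ECE (such as the 0.107 reported on GSM8K) means stated confidences track empirical accuracy more closely.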
Supplementary Material: zip
Submission Track: Workshop Paper Track
Submission Number: 24