Don’t Think Twice! Over-Reasoning Impairs Confidence Calibration

Published: 01 Jul 2025, Last Modified: 13 Jul 2025, ICML 2025 R2-FM Workshop Poster, CC BY 4.0
Keywords: reasoning models, reasoning budget, confidence, calibration, evaluation, benchmark, climate, public health
TL;DR: Increasing the thinking budget of reasoning LLMs makes them over-confident and less accurate at assessing the confidence of human experts in statements about climate and public health.
Abstract: Large Language Models deployed as question-answering tools require robust calibration to avoid overconfidence. We systematically evaluate how reasoning capabilities and reasoning budget affect confidence assessment accuracy, using the ClimateX dataset (Lacombe et al., 2023) and expanding it to human and planetary health. Our key finding challenges the "test-time scaling" paradigm: while recent reasoning LLMs achieve 48.7% accuracy in assessing expert confidence, increasing reasoning budgets consistently impairs rather than improves calibration. Extended reasoning leads to systematic overconfidence that worsens with longer thinking budgets, producing diminishing and eventually negative returns beyond modest computational investments. Conversely, search-augmented generation dramatically outperforms pure reasoning, achieving 89.3% accuracy by retrieving relevant evidence. Our results suggest that information access, rather than reasoning depth or inference budget, may be the critical bottleneck for improving confidence calibration on knowledge-intensive tasks.
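For illustration only, a minimal sketch of the kind of evaluation loop the abstract describes: prompt a model at several thinking budgets to label each statement with an expert confidence level, then score agreement with the ClimateX ground-truth labels. The `query_model` helper, the budget values, and the toy dataset below are hypothetical placeholders, not the authors' actual setup.

```python
import random

# IPCC-style confidence labels used in ClimateX-type statements.
LABELS = ["low", "medium", "high", "very high"]

def query_model(statement: str, thinking_budget: int) -> str:
    """Hypothetical stub: ask a reasoning LLM, limited to `thinking_budget`
    thinking tokens, how confident experts are in `statement`.
    Replace this random placeholder with a real model call."""
    return random.choice(LABELS)

def confidence_accuracy(dataset, thinking_budget: int) -> float:
    """Fraction of statements where the model's predicted confidence label
    matches the expert-assigned label."""
    correct = 0
    for statement, expert_label in dataset:  # dataset: list of (text, label) pairs
        prediction = query_model(statement, thinking_budget)
        correct += int(prediction == expert_label)
    return correct / len(dataset)

if __name__ == "__main__":
    # Toy example; the real dataset contains thousands of expert-labeled statements.
    dataset = [
        ("Global mean sea level rose by about 0.2 m between 1901 and 2018.", "high"),
    ]
    for budget in (256, 1024, 4096):  # illustrative budgets, not the paper's values
        print(f"budget={budget}: accuracy={confidence_accuracy(dataset, budget):.3f}")
```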
Submission Number: 177