Score: 1

Don't Think Twice! Over-Reasoning Impairs Confidence Calibration

Published: August 20, 2025 | arXiv ID: 2508.15050v1

By: Romain Lacombe, Kerrie Wu, Eddie Dilworth

BigTech Affiliations: Stanford University

Potential Business Impact:

Makes AI more honest about what it knows.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Large Language Models deployed as question-answering tools require robust calibration to avoid overconfidence. We systematically evaluate how reasoning capabilities and budget affect confidence assessment accuracy, using the ClimateX dataset (Lacombe et al., 2023) and expanding it to human and planetary health. Our key finding challenges the "test-time scaling" paradigm: while recent reasoning LLMs achieve 48.7% accuracy in assessing expert confidence, increasing reasoning budgets consistently impairs rather than improves calibration. Extended reasoning leads to systematic overconfidence that worsens with longer thinking budgets, producing diminishing and even negative returns beyond modest computational investments. Conversely, search-augmented generation dramatically outperforms pure reasoning, achieving 89.3% accuracy by retrieving relevant evidence. Our results suggest that information access, rather than reasoning depth or inference budget, may be the critical bottleneck for improving confidence calibration on knowledge-intensive tasks.
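The evaluation described in the abstract lends itself to a short sketch. The Python below is a minimal, hypothetical illustration of the calibration measurement: score a model's self-assessed confidence against expert IPCC-style labels while sweeping a test-time thinking budget. The dataset format, the assess_confidence callable, and the budget values are assumptions for illustration only, not the paper's released code.

    # Hypothetical sketch of the calibration evaluation, assuming a
    # ClimateX-style dataset of (statement, expert_confidence) pairs and a
    # user-supplied assess_confidence(statement, thinking_budget) callable
    # that returns one of the four IPCC confidence levels.

    from collections import Counter

    IPCC_LEVELS = ["low", "medium", "high", "very high"]

    def calibration_accuracy(dataset, assess_confidence, thinking_budget):
        """Fraction of statements where the model's self-assessed confidence
        matches the expert-assigned IPCC confidence level."""
        hits = 0
        predictions = Counter()
        for statement, expert_level in dataset:
            pred = assess_confidence(statement, thinking_budget)
            predictions[pred] += 1
            hits += (pred == expert_level)
        return hits / len(dataset), predictions

    if __name__ == "__main__":
        # Toy placeholder; the real ClimateX set has thousands of labeled statements.
        dataset = [
            ("Global mean sea level rose by about 0.20 m between 1901 and 2018.", "high"),
        ]

        def dummy_assessor(statement, thinking_budget):
            return "high"  # stand-in for a real LLM call

        # Sweep the thinking budget (illustrative token counts) to ask the
        # paper's core question: does more reasoning improve calibration?
        for budget in [0, 1024, 4096, 16384]:
            acc, dist = calibration_accuracy(dataset, dummy_assessor, budget)
            print(f"budget={budget:>6}: accuracy={acc:.3f}, prediction mix={dict(dist)}")

Plotting accuracy against budget in this setup would reproduce the paper's headline comparison, where accuracy degrades rather than improves as the reasoning budget grows.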

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Artificial Intelligence