Don't Think Twice! Over-Reasoning Impairs Confidence Calibration
By: Romain Lacombe, Kerrie Wu, Eddie Dilworth
Potential Business Impact:
Makes AI more honest about what it knows.
Large Language Models deployed as question-answering tools require robust calibration to avoid overconfidence. We systematically evaluate how reasoning capabilities and budget affect confidence assessment accuracy, using the ClimateX dataset (Lacombe et al., 2023) and expanding it to human and planetary health. Our key finding challenges the "test-time scaling" paradigm: while recent reasoning LLMs achieve 48.7% accuracy in assessing expert confidence, increasing reasoning budgets consistently impairs rather than improves calibration. Extended reasoning leads to systematic overconfidence that worsens with longer thinking budgets, producing diminishing and eventually negative returns beyond modest computational investments. Conversely, search-augmented generation dramatically outperforms pure reasoning, achieving 89.3% accuracy by retrieving relevant evidence. Our results suggest that information access, rather than reasoning depth or inference budget, may be the critical bottleneck for improved confidence calibration on knowledge-intensive tasks.
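The evaluation described above reduces to a confidence-classification task: the model assigns each statement the confidence level human experts gave it, and accuracy is the fraction of exact matches. The sketch below illustrates that loop under stated assumptions; the example statements, PROMPT_TEMPLATE, and query_model stub are hypothetical stand-ins, not the authors' actual dataset or evaluation harness.

```python
# Minimal sketch of a ClimateX-style confidence-calibration evaluation.
# The dataset contents and query_model stub below are illustrative placeholders.

from collections import Counter

# IPCC-style confidence labels used by ClimateX (Lacombe et al., 2023).
CONFIDENCE_LEVELS = ["low", "medium", "high", "very high"]

# Hypothetical examples: each statement is paired with an expert-assigned label.
DATASET = [
    {"statement": "Global mean sea level rose faster since 1900 than over any "
                  "preceding century in at least the last 3000 years.",
     "expert_confidence": "high"},
    {"statement": "Climate change has adversely affected the physical health "
                  "of people globally.",
     "expert_confidence": "very high"},
]

PROMPT_TEMPLATE = (
    "Assess the confidence level human experts would assign to the following "
    "statement. Answer with exactly one of: low, medium, high, very high.\n\n"
    "Statement: {statement}\nConfidence:"
)


def query_model(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation (with or without a
    reasoning budget or search augmentation). Replace with a real API call."""
    return "high"


def evaluate(dataset):
    """Exact-match accuracy of model-assessed vs. expert confidence labels."""
    results = Counter()
    for item in dataset:
        prediction = query_model(PROMPT_TEMPLATE.format(statement=item["statement"]))
        prediction = prediction.strip().lower()
        if prediction not in CONFIDENCE_LEVELS:
            results["invalid"] += 1
        elif prediction == item["expert_confidence"]:
            results["correct"] += 1
        else:
            results["incorrect"] += 1
    accuracy = results["correct"] / len(dataset)
    return accuracy, results


if __name__ == "__main__":
    acc, breakdown = evaluate(DATASET)
    print(f"confidence-assessment accuracy: {acc:.1%}  breakdown: {dict(breakdown)}")
```

Running the same loop at different thinking budgets, or with a retrieval step prepended to the prompt, is one way to reproduce the reasoning-versus-search comparison the abstract reports.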
Similar Papers
Don't Miss the Forest for the Trees: In-Depth Confidence Estimation for LLMs via Reasoning over the Answer Space
Computation and Language
Helps AI know how sure it is about answers.
Thought calibration: Efficient and confident test-time scaling
Machine Learning (CS)
Lets AI think less, save energy, and still be smart.
Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories
Machine Learning (CS)
Models get stuck thinking too much and ignore right answers.