Human-Alignment and Calibration of Inference-Time Uncertainty in Large Language Models

Published: August 11, 2025 | arXiv ID: 2508.08204v1

By: Kyle Moore, Jesse Roberts, Daryl Watson

Potential Business Impact:

Helps AI systems recognize and signal, in real time, when they are unsure of an answer, supporting safer model control and better-calibrated user trust.

There has been much recent interest in evaluating large language models for uncertainty calibration to facilitate model control and modulate user trust. Inference-time uncertainty, which may provide a real-time signal to the model or to external control modules, is particularly important for applying these concepts to improve the LLM-user experience in practice. While much existing work considers model calibration, comparatively little has sought to evaluate how closely model uncertainty aligns with human uncertainty. In this work, we evaluate a collection of inference-time uncertainty measures, using both established metrics and novel variations, to determine how closely they align with human group-level uncertainty and with traditional notions of model calibration. We find that numerous measures show evidence of strong alignment with human uncertainty, even in the absence of alignment with human answer preference. For those successful metrics, we find moderate to strong evidence of model calibration in terms of both correctness correlation and distributional analysis.
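The abstract does not specify which uncertainty measures or alignment metrics the authors used. As a minimal sketch, assuming a multiple-choice question-answering setting, the snippet below shows how one common inference-time measure (predictive entropy over the answer options) and a simple group-level comparison to a human answer distribution might be computed. All logits, human proportions, and function names here are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np
from scipy.stats import entropy, spearmanr

def answer_distribution(option_logits):
    """Softmax over the logits of the answer options (e.g., A-D)
    to obtain the model's inference-time answer distribution."""
    z = np.asarray(option_logits, dtype=float)
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def predictive_entropy(option_logits):
    """Shannon entropy of the answer distribution: one common
    inference-time uncertainty measure."""
    return entropy(answer_distribution(option_logits))

# Hypothetical example: model logits for a 4-option question and the
# fraction of human respondents choosing each option.
logits = [2.1, 1.9, -0.5, -1.0]
model_p = answer_distribution(logits)
human_p = np.array([0.45, 0.40, 0.10, 0.05])

# Group-level alignment: compare the two distributions directly...
tv_distance = 0.5 * np.abs(model_p - human_p).sum()
# ...or rank-correlate them (in practice one would correlate
# per-item uncertainties across a whole dataset).
rho, _ = spearmanr(model_p, human_p)

print(f"model predictive entropy: {predictive_entropy(logits):.3f}")
print(f"total variation distance to human distribution: {tv_distance:.3f}")
print(f"spearman rho: {rho:.3f}")
```

A low total variation distance or high rank correlation here would indicate that the model's answer distribution tracks the human group's; calibration, by contrast, would be assessed by correlating such uncertainty scores with correctness across many items.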

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Computation and Language