Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models
By: Tianyi Zhou, Johanne Medina, Sanjay Chawla
Potential Business Impact:
Helps computers know when they are wrong.
Large Language Models (LLMs) are prone to generating fluent but incorrect content, known as confabulation, which poses increasing risks in multi-turn or agentic applications where outputs may be reused as context. In this work, we investigate how in-context information influences model behavior and whether LLMs can identify their unreliable responses. We propose a reliability estimation method that leverages token-level uncertainty to guide the aggregation of internal model representations. Specifically, we compute aleatoric and epistemic uncertainty from output logits to identify salient tokens and aggregate their hidden states into compact representations for response-level reliability prediction. Through controlled experiments on open QA benchmarks, we find that correct in-context information improves both answer accuracy and model confidence, while misleading context often induces confidently incorrect responses, revealing a misalignment between uncertainty and correctness. Our probing-based method captures these shifts in model behavior and improves the detection of unreliable outputs across multiple open-source LLMs. These results underscore the limitations of direct uncertainty signals and highlight the potential of uncertainty-guided probing for reliability-aware generation.
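As a rough illustration of the pipeline the abstract describes, the sketch below shows one way token-level uncertainty could be computed from output logits and used to select salient tokens whose hidden states are pooled into a compact representation for a small reliability probe. The specific decomposition used here (predictive entropy as total uncertainty, expected entropy over stochastic forward passes as aleatoric, and their difference as epistemic), the top-k token selection, and the linear probe are assumptions for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact method): token-level uncertainty
# computed from logits guides which hidden states are pooled into a
# response-level feature vector for a small reliability probe.
import torch
import torch.nn as nn
import torch.nn.functional as F


def token_uncertainties(logits_samples: torch.Tensor):
    """logits_samples: (S, T, V) — S stochastic forward passes (e.g. MC dropout),
    T generated tokens, V vocabulary size. Returns per-token aleatoric and
    epistemic uncertainty under an entropy-based decomposition (assumed here)."""
    probs = F.softmax(logits_samples, dim=-1)                              # (S, T, V)
    mean_probs = probs.mean(dim=0)                                         # (T, V)
    total = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)      # predictive entropy, (T,)
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)    # expected entropy, (T,)
    epistemic = total - aleatoric                                          # mutual information, (T,)
    return aleatoric, epistemic


class ReliabilityProbe(nn.Module):
    """Linear probe over hidden states of the most uncertain ("salient") tokens."""

    def __init__(self, hidden_dim: int, top_k: int = 8):
        super().__init__()
        self.top_k = top_k
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor, epistemic: torch.Tensor):
        # hidden_states: (T, H) last-layer states of the generated tokens;
        # epistemic: (T,) per-token epistemic uncertainty.
        k = min(self.top_k, hidden_states.size(0))
        idx = epistemic.topk(k).indices                    # most uncertain tokens
        pooled = hidden_states[idx].mean(dim=0)            # compact representation, (H,)
        return torch.sigmoid(self.head(pooled))            # predicted reliability in [0, 1]
```

A probe of this kind would be trained on responses labeled for correctness; the epistemic-uncertainty-based token selection and mean pooling are stand-ins for whatever salience criterion and aggregation the paper actually uses.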
Similar Papers
Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs
Computation and Language
Helps AI understand when doctors are unsure.
Interpreting and Mitigating Unwanted Uncertainty in LLMs
Computation and Language
Fixes AI answers so they stay correct.
Estimating LLM Uncertainty with Evidence
Computation and Language
Helps computers know when they are wrong.