Extending Epistemic Uncertainty Beyond Parameters Would Assist in Designing Reliable LLMs
By: T. Duy Nguyen-Hien, Desi R. Ivanova, Yee Whye Teh, and more
Potential Business Impact:
Helps AI ask questions when unsure.
Although large language models (LLMs) are highly interactive and extendable, current approaches to ensure reliability in deployments remain mostly limited to rejecting outputs with high uncertainty in order to avoid misinformation. This conservative strategy reflects the current lack of tools to systematically distinguish and respond to different sources of uncertainty. In this paper, we advocate for the adoption of Bayesian Modeling of Experiments -- a framework that provides a coherent foundation to reason about uncertainty and clarify the reducibility of uncertainty -- for managing and proactively addressing uncertainty that arises in LLM deployments. This framework enables LLMs and their users to take contextually appropriate steps, such as requesting clarification, retrieving external information, or refining inputs. By supporting active resolution rather than passive avoidance, it opens the door to more reliable, transparent, and broadly applicable LLM systems, particularly in high-stakes, real-world settings.
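To make the distinction between reducible and irreducible uncertainty concrete, here is a minimal illustrative sketch, not taken from the paper, using the standard Bayesian decomposition of predictive uncertainty into aleatoric and epistemic parts (mutual information). The ensemble of answer distributions, the thresholds, and the action names are hypothetical assumptions chosen to show how a system might ask for clarification or retrieve information when uncertainty is reducible, rather than simply abstaining.

```python
# Illustrative sketch only (not the paper's method): decompose predictive
# uncertainty over candidate answers into aleatoric and epistemic parts,
# then pick a contextually appropriate action. Thresholds and action names
# are hypothetical.
import numpy as np


def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) along the given axis."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)


def decompose_uncertainty(member_probs):
    """member_probs: (n_members, n_answers) distributions over candidate answers,
    e.g. from posterior samples or resampled prompts/contexts.
    Returns (total, aleatoric, epistemic), with total = aleatoric + epistemic."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                   # H[ E_theta p(y|x, theta) ]
    aleatoric = entropy(member_probs).mean()  # E_theta H[ p(y|x, theta) ]
    epistemic = total - aleatoric             # mutual information I(y; theta | x)
    return total, aleatoric, epistemic


def choose_action(member_probs, epistemic_threshold=0.2, total_threshold=0.7):
    """High epistemic (reducible) uncertainty -> gather information;
    high aleatoric (irreducible) uncertainty -> hedge or abstain."""
    total, _, epistemic = decompose_uncertainty(member_probs)
    if epistemic > epistemic_threshold:
        return "ask_clarification_or_retrieve"  # uncertainty can be reduced by acting
    if total > total_threshold:
        return "abstain_or_hedge"               # remaining ambiguity is irreducible
    return "answer"


if __name__ == "__main__":
    # Members disagree sharply -> epistemic uncertainty dominates -> seek information.
    disagreeing = np.array([[0.90, 0.05, 0.05],
                            [0.05, 0.90, 0.05],
                            [0.05, 0.05, 0.90]])
    # Members agree the question is genuinely ambiguous -> aleatoric dominates.
    ambiguous = np.tile(np.array([[1 / 3, 1 / 3, 1 / 3]]), (3, 1))
    print(choose_action(disagreeing))  # ask_clarification_or_retrieve
    print(choose_action(ambiguous))    # abstain_or_hedge
```

In this toy setup, the disagreeing ensemble triggers an information-seeking action because its uncertainty is reducible, while the uniformly ambiguous ensemble leads to hedging, which mirrors the active-resolution versus passive-avoidance contrast drawn in the abstract.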
Similar Papers
Textual Bayes: Quantifying Uncertainty in LLM-Based Systems
Machine Learning (CS)
Makes AI smarter and more honest about what it knows.
The challenge of uncertainty quantification of large language models in medicine
Artificial Intelligence
Helps doctors know when AI is unsure about health advice.
A Survey of Uncertainty Estimation Methods on Large Language Models
Computation and Language
Helps AI tell when it's making things up.