Beyond Quantification: Navigating Uncertainty in Professional AI Systems
By: Sylvie Delacroix, Diana Robinson, Umang Bhatt, and more
Potential Business Impact:
Helps AI show when it's unsure about answers.
The growing integration of large language models across professional domains is transforming how experts make critical decisions in healthcare, education, and law. While significant research effort focuses on enabling these systems to accompany their outputs with probabilistic measures of reliability, many consequential forms of uncertainty in professional contexts resist such quantification. A physician weighing whether to document possible domestic abuse, a teacher assessing cultural sensitivity, or a mathematician distinguishing procedural from conceptual understanding all face forms of uncertainty that cannot be reduced to percentages. This paper argues for moving beyond simple quantification toward the richer expressions of uncertainty essential for beneficial AI integration. We propose participatory refinement processes through which professional communities collectively shape how different forms of uncertainty are communicated. Our approach acknowledges that uncertainty expression is a form of professional sense-making that requires collective development rather than algorithmic optimization.
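To make the abstract's contrast concrete, here is a minimal hypothetical sketch in Python of the two styles of uncertainty expression it discusses: a single probabilistic reliability score, and that same score paired with categorical, non-numeric flags for uncertainties that resist quantification. All function names, thresholds, and flag labels are illustrative assumptions, not anything proposed in the paper itself.

```python
# Hypothetical sketch contrasting quantified confidence with richer,
# non-numeric uncertainty expression. All names and values are illustrative.

def quantified_confidence(token_probs):
    """Reduce a model answer to one reliability number (here, the mean
    token probability) -- the style of expression the paper argues is
    insufficient for many professional uncertainties."""
    return sum(token_probs) / len(token_probs)

def richer_expression(confidence, unquantifiable_flags):
    """Pair the numeric score with named, unresolved uncertainties --
    one possible shape for the 'richer expressions' the paper calls for."""
    msg = f"confidence ~ {confidence:.2f}"
    if unquantifiable_flags:
        msg += "; unresolved: " + ", ".join(unquantifiable_flags)
    return msg

score = quantified_confidence([0.91, 0.84, 0.88])
print(richer_expression(score, ["cultural sensitivity", "documentation ethics"]))
# prints: confidence ~ 0.88; unresolved: cultural sensitivity, documentation ethics
```

The point of the sketch is only that the categorical flags carry information a percentage cannot: which uncertainties remain, not merely how large they are.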
Similar Papers
Human-AI Collaborative Uncertainty Quantification
Artificial Intelligence
AI helps people make better guesses.
The challenge of uncertainty quantification of large language models in medicine
Artificial Intelligence
Helps doctors know when AI is unsure about health advice.
Quantifying Uncertainty in Machine Learning-Based Pervasive Systems: Application to Human Activity Recognition
Software Engineering
Tells you when AI might be wrong.