Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs
By: Srivarshinee Sridhar, Raghav Kaushik Ravi, Kripabandhu Ghosh
Potential Business Impact:
Helps AI understand when doctors are unsure.
Large Language Models (LLMs) are increasingly used in clinical settings, where sensitivity to linguistic uncertainty can influence diagnostic interpretation and decision-making. Yet little is known about where such epistemic cues are represented within these models. Distinct from uncertainty quantification, which measures output confidence, this work examines input-side representational sensitivity to linguistic uncertainty in medical text. We curate a contrastive dataset of clinical statements varying in epistemic modality (e.g., 'is consistent with' vs. 'may be consistent with') and propose Model Sensitivity to Uncertainty (MSU), a layerwise probing metric that quantifies activation-level shifts induced by uncertainty cues. Our results show that LLMs exhibit structured, depth-dependent sensitivity to clinical uncertainty, suggesting that epistemic information is progressively encoded in deeper layers. These findings reveal how linguistic uncertainty is internally represented in LLMs, offering insight into their interpretability and epistemic reliability.
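The abstract does not spell out the exact form of MSU, so the following is only a minimal sketch of a layerwise contrastive probe in the spirit it describes: run a matched pair of clinical statements that differ only in epistemic modality through a model, then score how far the hidden states move at each layer. The model name, mean-pooling over tokens, and cosine distance used below are illustrative assumptions, not the paper's definition.

```python
# Hypothetical layerwise sensitivity probe for contrastive epistemic pairs.
# The pooling and distance choices are assumptions; MSU as defined in the
# paper may aggregate activations differently.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder; any model exposing hidden states works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layerwise_sensitivity(certain: str, uncertain: str) -> list[float]:
    """Return one activation-shift score per layer for a contrastive pair."""
    reps = []
    with torch.no_grad():
        for text in (certain, uncertain):
            inputs = tokenizer(text, return_tensors="pt")
            hidden_states = model(**inputs).hidden_states  # tuple of (batch, seq, dim) per layer
            # Mean-pool over tokens to get one vector per layer (an assumption).
            reps.append([h.mean(dim=1).squeeze(0) for h in hidden_states])
    scores = []
    for h_cert, h_unc in zip(*reps):
        # 1 - cosine similarity as the activation-level shift (an assumption).
        shift = 1.0 - torch.nn.functional.cosine_similarity(h_cert, h_unc, dim=0).item()
        scores.append(shift)
    return scores

# Contrastive clinical pair differing only in epistemic modality.
pair = ("The finding is consistent with pneumonia.",
        "The finding may be consistent with pneumonia.")
print(layerwise_sensitivity(*pair))
```

Plotting such scores against layer index, averaged over many curated pairs, is one way the depth-dependent sensitivity pattern reported in the abstract could be visualized.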
Similar Papers
The challenge of uncertainty quantification of large language models in medicine
Artificial Intelligence
Helps doctors know when AI is unsure about health advice.
Can LLMs Detect Their Confabulations? Estimating Reliability in Uncertainty-Aware Language Models
Computation and Language
Helps computers know when they are wrong.
Measuring Aleatoric and Epistemic Uncertainty in LLMs: Empirical Evaluation on ID and OOD QA Tasks
Computation and Language
Helps computers know when they are unsure.