Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs

Published: November 27, 2025 | arXiv ID: 2511.22402v1

By: Srivarshinee Sridhar, Raghav Kaushik Ravi, Kripabandhu Ghosh

Potential Business Impact:

Helps AI understand when doctors are unsure.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly used in clinical settings, where sensitivity to linguistic uncertainty can influence diagnostic interpretation and decision-making. Yet little is known about where such epistemic cues are internally represented within these models. Distinct from uncertainty quantification, which measures output confidence, this work examines input-side representational sensitivity to linguistic uncertainty in medical text. We curate a contrastive dataset of clinical statements varying in epistemic modality (e.g., 'is consistent with' vs. 'may be consistent with') and propose Model Sensitivity to Uncertainty (MSU), a layerwise probing metric that quantifies activation-level shifts induced by uncertainty cues. Our results show that LLMs exhibit structured, depth-dependent sensitivity to clinical uncertainty, suggesting that epistemic information is progressively encoded in deeper layers. These findings reveal how linguistic uncertainty is internally represented in LLMs, offering insight into their interpretability and epistemic reliability.
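The paper does not give the exact formula for MSU here, but the idea of a layerwise probing metric that quantifies activation-level shifts between contrastive pairs can be sketched. The snippet below is a hypothetical illustration (not the authors' implementation): for each layer, it averages the cosine distance between the hidden-state vectors of matched certain/uncertain clinical statements, using toy random activations in place of real model outputs.

```python
import numpy as np

def layerwise_sensitivity(acts_certain, acts_uncertain):
    """Hypothetical MSU-style metric: mean cosine distance per layer
    between activations of matched certain/uncertain statement pairs.

    Both inputs have shape (n_layers, n_pairs, hidden_dim); the result
    is one sensitivity score per layer.
    """
    a = np.asarray(acts_certain, dtype=float)
    b = np.asarray(acts_uncertain, dtype=float)
    dot = (a * b).sum(axis=-1)
    norm = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    cos = dot / np.clip(norm, 1e-12, None)
    return (1.0 - cos).mean(axis=-1)

# Toy data standing in for real hidden states: deeper layers receive a
# progressively larger perturbation from the uncertainty cue.
rng = np.random.default_rng(0)
n_layers, n_pairs, dim = 12, 8, 64
base = rng.normal(size=(n_layers, n_pairs, dim))
shift = np.linspace(0.0, 2.0, n_layers)[:, None, None]
msu = layerwise_sensitivity(
    base, base + shift * rng.normal(size=(n_layers, n_pairs, dim))
)
print(msu)  # one score per layer; grows with depth in this toy setup
```

With real models, the toy `base` array would be replaced by hidden states extracted per layer (e.g., via a forward pass that returns all intermediate activations) for each sentence in the contrastive pairs, such as "is consistent with" versus "may be consistent with".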

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language