Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes
By: Meng Li, Michael Vrazitulis, David Schlangen
Potential Business Impact:
Helps computers show when they are unsure.
Rational speakers are expected to know what they know and what they do not know, and to generate expressions that match the strength of their evidence. In contrast, it remains a challenge for current large language models to generate utterances that reflect their assessment of facts and their confidence in an uncertain real-world environment. While it has recently become popular to estimate and calibrate the confidence of LLMs with verbalized uncertainty, what is lacking is a careful examination of the linguistic knowledge of uncertainty encoded in the latent space of LLMs. In this paper, we draw on typological frameworks of epistemic expressions to evaluate LLMs' knowledge of epistemic modality, using controlled stories. Our experiments show that the performance of LLMs in generating epistemic expressions is limited and not robust, and hence the expressions of uncertainty generated by LLMs are not always reliable. To build uncertainty-aware LLMs, it is necessary to enrich the semantic knowledge of epistemic modality in LLMs.
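To make the evaluation idea concrete, here is a minimal, hypothetical sketch (not the authors' code or materials): it compares how strongly a small causal LM prefers a strong versus a weak epistemic marker in two controlled stories that differ only in the strength of evidence. The model name, the stories, and the marker pair are illustrative assumptions.

```python
# Hypothetical probe: does the model's preference for "must" vs. "might"
# track the strength of evidence in a controlled story? (Illustrative only.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in model; the paper does not prescribe this setup
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Same scenario with direct vs. indirect evidence; the continuation should
# begin with an epistemic marker whose strength matches the evidence.
STORIES = {
    "direct":   "Anna saw Ben take the last cookie from the jar. Ben",
    "indirect": "Anna only noticed that the jar was empty and Ben was nearby. Ben",
}
MARKERS = [" must", " might"]  # strong vs. weak epistemic modal

def marker_logprob(prefix: str, marker: str) -> float:
    """Summed log-probability of the marker tokens given the story prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    marker_ids = tokenizer(marker, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, marker_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so drop the final position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    start = prefix_ids.shape[1] - 1          # first position predicting the marker
    target = input_ids[0, prefix_ids.shape[1]:]
    return log_probs[start:start + target.shape[0]].gather(
        1, target.unsqueeze(1)).sum().item()

for condition, story in STORIES.items():
    scores = {m.strip(): marker_logprob(story, m) for m in MARKERS}
    print(condition, scores)
```

Under this sketch, a model with robust knowledge of epistemic modality would assign relatively higher probability to "must" in the direct-evidence story and to "might" in the indirect-evidence one; the paper's finding is that such behavior is limited and not robust.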
Similar Papers
Exploring the Potential for Large Language Models to Demonstrate Rational Probabilistic Beliefs
Artificial Intelligence
Makes AI understand "maybe" better for trust.
Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs
Computation and Language
Helps AI understand when doctors are unsure.
Extending Epistemic Uncertainty Beyond Parameters Would Assist in Designing Reliable LLMs
Machine Learning (CS)
Helps AI ask questions when unsure.