On Fact and Frequency: LLM Responses to Misinformation Expressed with Uncertainty
By: Yana van de Sande, Gunes Açar, Thabo van Woudenberg, and more
Potential Business Impact:
AI can believe false claims when they are phrased with doubt.
We study LLM judgments of misinformation expressed with uncertainty. Our experiments examine the responses of three widely used LLMs (GPT-4o, LLaMA3, DeepSeek-v2) to misinformation propositions that have been verified as false and then transformed into uncertain statements according to an uncertainty typology. Our results show that after transformation, LLMs change their fact-checking classification from false to not-false in 25% of cases. Analysis reveals that the change cannot be explained by predictors to which humans are expected to be sensitive, i.e., modality, linguistic cues, or argumentation strategy. The exception is doxastic transformations, which use linguistic cue phrases such as "It is believed ...". To gain further insight, we prompt the LLMs to make another judgment about the transformed misinformation statements that is not related to truth value. Specifically, we study LLM estimates of the frequency with which people make the uncertain statement. We find a small but significant correlation between judgment of fact and estimation of frequency.
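The pipeline the abstract describes can be sketched in code. The sketch below is illustrative only: ask_llm() is a hypothetical stand-in for whichever model API (GPT-4o, LLaMA3, DeepSeek-v2) is under test, the cue phrases are invented examples rather than the paper's actual uncertainty typology, and the point-biserial correlation is one reasonable choice for relating a binary fact judgment to a continuous frequency estimate, not necessarily the authors' statistic.

```python
# Minimal sketch of the experiment: transform verified-false claims into
# uncertain statements, re-run the fact check, elicit a frequency estimate,
# and correlate the two judgments. All names below are illustrative.
from scipy.stats import pointbiserialr

# Illustrative uncertainty transformations keyed by type (hypothetical,
# not the paper's typology).
TRANSFORMS = {
    "doxastic":   lambda s: f"It is believed that {s[0].lower()}{s[1:]}",
    "epistemic":  lambda s: f"It is possible that {s[0].lower()}{s[1:]}",
    "evidential": lambda s: f"Reportedly, {s[0].lower()}{s[1:]}",
}

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around the model's chat endpoint."""
    raise NotImplementedError("plug in your LLM client here")

def fact_check(statement: str) -> bool:
    """Return True if the model classifies the statement as false."""
    answer = ask_llm(
        f"Is the following statement true or false? Answer with one word.\n\n{statement}"
    )
    return "false" in answer.lower()

def estimate_frequency(statement: str) -> float:
    """Elicit a 0-100 estimate of how often people make this statement."""
    answer = ask_llm(
        "On a scale of 0 to 100, how frequently do people make the following "
        f"statement?\n\n{statement}\n\nAnswer with a number only."
    )
    return float(answer.strip())

def run_experiment(false_claims: list[str]) -> tuple[float, float]:
    labels, freqs = [], []
    for claim in false_claims:
        for transform in TRANSFORMS.values():
            uncertain = transform(claim)
            labels.append(int(fact_check(uncertain)))  # 1 = still judged false
            freqs.append(estimate_frequency(uncertain))
    # Point-biserial correlation between the binary fact judgment and the
    # continuous frequency estimate.
    r, p = pointbiserialr(labels, freqs)
    return r, p
```

Under these assumptions, the fraction of transformed claims with label 0 corresponds to the 25% of cases in which the classification flipped from false to not-false.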
Similar Papers
Unmasking Digital Falsehoods: A Comparative Analysis of LLM-Based Misinformation Detection Strategies
Computation and Language
Helps computers spot fake news online.
An Empirical Analysis of LLMs for Countering Misinformation
Computation and Language
Helps computers counter fake news, though with room for improvement.
Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts
Computation and Language
Helps tell whether news sources are factual or biased.