Show me the evidence: Evaluating the role of evidence and natural language explanations in AI-supported fact-checking
By: Greta Warren, Jingyi Sun, Irina Shklovski, and more
Potential Business Impact:
People trust AI more when they can check the evidence behind its claims.
Although much research has focused on AI explanations to support decisions in complex information-seeking tasks such as fact-checking, the role of evidence remains surprisingly under-researched. In our study, we systematically varied the explanation type, the AI system's prediction certainty, and the correctness of its advice for non-expert participants, who evaluated the veracity of claims and of the AI system's predictions. Participants were given the option of easily inspecting the underlying evidence. We found that participants consistently relied on evidence to validate the AI's claims across all experimental conditions. When natural language explanations were presented, participants consulted the evidence less frequently, although they still relied on it when those explanations seemed insufficient or flawed. Qualitative data suggest that participants attempted to infer the reliability of evidence sources, even though source identities were deliberately omitted. Our results demonstrate that evidence is a key ingredient in how people evaluate the reliability of information presented by an AI system and that, combined with natural language explanations, it offers valuable support for decision-making. Further research is urgently needed to understand how evidence ought to be presented and how people engage with it in practice.
Similar Papers
Can AI Explanations Make You Change Your Mind?
Human-Computer Interaction
Helps people trust AI by showing how it thinks.
Humans incorrectly reject confident accusatory AI judgments
Human-Computer Interaction
AI judges lies better than people, but we don't trust it.
FACTS&EVIDENCE: An Interactive Tool for Transparent Fine-Grained Factual Verification of Machine-Generated Text
Computation and Language
Helps you check if AI text is true.