Beyond Misinformation: A Conceptual Framework for Studying AI Hallucinations in (Science) Communication
By: Anqi Shao
Potential Business Impact:
Helps you understand when and why AI makes things up.
This paper proposes a conceptual framework for understanding AI hallucinations as a distinct form of misinformation. While misinformation scholarship has traditionally focused on human intent, generative AI systems now produce false yet plausible outputs without any such intent. I argue that these AI hallucinations should be treated not merely as technical failures but as communication phenomena with social consequences. Drawing on a supply-and-demand model and the concept of distributed agency, the framework outlines how hallucinations differ from human-generated misinformation in their production, perception, and institutional response. I conclude by outlining a research agenda for communication scholars to investigate the emergence, dissemination, and audience reception of hallucinated content, with attention to the macro (institutional), meso (group), and micro (individual) levels. This work urges communication researchers to rethink the boundaries of misinformation theory in light of probabilistic, non-human actors increasingly embedded in knowledge production.
Similar Papers
Hallucinating with AI: AI Psychosis as Distributed Delusions
Computers and Society
Helps us stop believing AI's fake stories.
Wireless Hallucination in Generative AI-enabled Communications: Concepts, Issues, and Solutions
Information Theory
Stops smart AI from making up fake wireless signals.
Medical Hallucinations in Foundation Models and Their Impact on Healthcare
Computation and Language
Stops AI from making up false medical advice.