Potential Business Impact:
AI can spread rumors, causing new kinds of harm.
Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, biographies, news articles, and non-existent tech products and features. Recently, some have argued that these hallucinations are better understood as bullshit: chatbots produce rich streams of text that look truth-apt without any concern for the truthfulness of what that text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms -- what we call "technosocial harms" -- that follow from it.
Similar Papers
- Hallucinating with AI: AI Psychosis as Distributed Delusions (Computers and Society) -- Helps us stop believing AI's fake stories.
- Just Asking Questions: Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots (Computers and Society) -- AI chatbots sometimes spread fake conspiracy stories.
- Engineering of Hallucination in Generative AI: It's not a Bug, it's a Feature (Computation and Language) -- Makes AI creative by letting it imagine things.