Engineering of Hallucination in Generative AI: It's not a Bug, it's a Feature
By: Tim Fingscheidt, Patrick Blumenberg, Björn Möller
Potential Business Impact:
Makes AI creative by letting it imagine things.
Generative artificial intelligence (AI) is conquering our lives at lightning speed. Large language models such as ChatGPT answer our questions or write texts for us; large computer vision models such as GAIA-1 generate videos on the basis of text descriptions or continue prompted videos. These neural network models are trained on large amounts of text or video data and thus strictly follow the real data employed in training. However, there is a surprising observation: when we use these models, they only function satisfactorily when they are allowed a certain degree of fantasy (hallucination). While hallucination usually has a negative connotation in generative AI (after all, ChatGPT is expected to give a fact-based answer!), this article recapitulates some simple means of probability engineering that can be used to encourage generative AI to hallucinate to a limited extent and thus produce the desired results. We have to ask ourselves: is hallucination in generative AI perhaps not a bug, but rather a feature?
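The abstract does not spell out which "probability engineering" techniques the article covers, but a common way to dial hallucination up or down in generative models is to reshape the output token distribution at sampling time. The sketch below, a minimal illustration assuming standard softmax-temperature and top-k decoding controls (not the authors' specific method), shows how a higher temperature admits more "fantasy" while a lower one makes decoding nearly deterministic.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Sample a token index from raw model logits.

    temperature < 1.0 sharpens the distribution (less hallucination),
    temperature > 1.0 flattens it (more "fantasy"); top_k restricts
    sampling to the k most probable tokens.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)

    if top_k is not None:
        # Mask out everything below the k-th largest logit.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)

    # Temperature-scaled softmax over the (possibly truncated) logits.
    scaled = logits / max(temperature, 1e-8)
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()

    return rng.choice(len(probs), p=probs)

# Toy vocabulary of 5 tokens: a very low temperature behaves almost
# greedily, while T = 1.2 with top_k = 3 occasionally picks less
# likely tokens, i.e. it "hallucinates" in a controlled way.
logits = [2.0, 1.5, 0.3, -1.0, -2.5]
print(sample_next_token(logits, temperature=0.01))           # ~always token 0
print(sample_next_token(logits, temperature=1.2, top_k=3))   # varies over tokens 0-2
```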
Similar Papers
Hallucinating with AI: AI Psychosis as Distributed Delusions
Computers and Society
Helps us stop believing AI's fake stories.
Hallucination, reliability, and the role of generative AI in science
Computers and Society
Fixes AI mistakes that trick scientists.
Wireless Hallucination in Generative AI-enabled Communications: Concepts, Issues, and Solutions
Information Theory
Stops smart AI from making up fake wireless signals.