How good are humans at detecting AI-generated images? Learnings from an experiment
By: Thomas Roca, Anthony Cintron Roman, Jehú Torres Vega, and more
Potential Business Impact:
People can't reliably tell real pictures from AI-generated ones.
As AI-powered image generation improves, a key question is how well human beings can differentiate between "real" and AI-generated or modified images. Using data collected from the online game "Real or Not Quiz", this study investigates how effectively people can distinguish AI-generated images from real ones. Participants viewed a randomized set of real and AI-generated images and attempted to identify the authenticity of each. Analysis of approximately 287,000 image evaluations by over 12,500 global participants revealed an overall success rate of only 62%, indicating a modest ability, only slightly above chance. Participants were most accurate with human portraits but struggled significantly with natural and urban landscapes. These results highlight the inherent challenge humans face in distinguishing AI-generated visual content, particularly images without obvious artifacts or stylistic cues. The study stresses the need for transparency measures, such as watermarking, alongside robust AI detection tools, to mitigate the risks of misinformation arising from AI-generated content.
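To put the headline number in perspective, here is a minimal sketch, using only the figures reported in the abstract (~287,000 evaluations, a 62% success rate, a 50% chance baseline), of an approximate one-sample proportion z-test. The test itself is our illustration, not the paper's analysis; it simply shows that while the aggregate result is statistically far above chance, per-image accuracy remains modest.

```python
from math import sqrt

# Illustrative check (not from the paper): is the reported 62% success
# rate distinguishable from the 50% chance baseline, given the sample size?
n = 287_000   # approximate number of image evaluations (from the abstract)
p_hat = 0.62  # reported overall success rate
p0 = 0.50     # chance level for a binary real-vs-AI judgment

se = sqrt(p0 * (1 - p0) / n)  # standard error under the null hypothesis
z = (p_hat - p0) / se         # z-statistic for the observed proportion
print(f"z = {z:.1f}")         # ~129: far above chance in aggregate,
                              # yet 62% per image is still a modest ability
```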
Similar Papers
Can dialogues with AI systems help humans better discern visual misinformation?
Human-Computer Interaction
Talking with AI can help people spot fake news.
Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images
Human-Computer Interaction
Helps tell real pictures from AI-generated fakes.
Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills
Human-Computer Interaction
AI dialogues reduce belief in fake news but don't build lasting skill at spotting it.