AI labeling reduces the perceived accuracy of online content but has limited broader effects
By: Chuyao Wang, Patrick Sturgis, Daniel de Kadt
Potential Business Impact:
AI labels make news seem less true.
Explicit labeling of online content produced by artificial intelligence (AI) is a widely mooted policy for ensuring transparency and promoting public confidence. Yet little is known about the scope of AI labeling effects on public assessments of labeled content. We contribute new evidence on this question from a survey experiment using a high-quality nationally representative probability sample (n = 3,861). First, we demonstrate that explicit AI labeling of a news article about a proposed public policy reduces its perceived accuracy. Second, we test whether there are spillover effects in terms of policy interest, policy support, and general concerns about online misinformation. We find that AI labeling reduces interest in the policy, but neither influences support for the policy nor triggers general concerns about online misinformation. We further find that increasing the salience of AI use reduces the negative impact of AI labeling on perceived accuracy, while one-sided versus two-sided framing of the policy has no moderating effect. Overall, our findings suggest that the effects of algorithm aversion induced by AI labeling of online content are limited in scope.
Similar Papers
Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects
Computers and Society
Labels don't stop AI messages from changing minds.
Dialogues with AI Reduce Beliefs in Misinformation but Build No Lasting Discernment Skills
Human-Computer Interaction
Chatting with AI reduces belief in fake news, but the effect doesn't last.
Labeling Synthetic Content: User Perceptions of Warning Label Designs for AI-generated Content on Social Media
Human-Computer Interaction
Warning labels help people spot AI-made videos and pictures.