Biased by Design: Leveraging Inherent AI Biases to Enhance Critical Thinking of News Readers
By: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, and more
Potential Business Impact:
Helps you spot propaganda by showing different political views.
This paper explores the design of a propaganda detection tool using Large Language Models (LLMs). Acknowledging the inherent biases in AI models, especially in political contexts, we investigate how these biases might be leveraged to enhance critical thinking in news consumption. Countering the typical view of AI biases as detrimental, our research proposes design strategies of user choice and personalization tailored to a user's political stance, drawing on the psychological concepts of confirmation bias and cognitive dissonance. We present findings from a qualitative user study, offering insights and design recommendations (bias awareness, personalization and choice, and gradual introduction of diverse perspectives) for AI tools in propaganda detection.
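To make the design strategies more concrete, below is a minimal, illustrative sketch of how an LLM-based propaganda detector might implement user choice, bias awareness, and gradual exposure to an opposing perspective. This is not the authors' implementation; the names (call_llm, PERSPECTIVES, build_prompt) and the prompt wording are hypothetical, and the placeholder call_llm would be replaced by any real LLM client.

```python
# Illustrative sketch only: personalization and choice, bias awareness,
# and gradual introduction of a diverse perspective in one prompt.

PERSPECTIVES = {
    "left": "a politically left-leaning analyst",
    "center": "a politically neutral analyst",
    "right": "a politically right-leaning analyst",
}

def build_prompt(article: str, user_stance: str, counter_stance: str) -> str:
    """Ask the model to flag propaganda techniques twice: first from the
    reader's own stance (lowering cognitive dissonance), then from an
    opposing stance (gradually introducing a diverse perspective)."""
    return (
        "Analyze the news article below for propaganda techniques "
        "(e.g., loaded language, appeal to fear, whataboutism).\n\n"
        f"First, explain your findings as {PERSPECTIVES[user_stance]}.\n"
        f"Then, explain the same findings as {PERSPECTIVES[counter_stance]}.\n"
        "Label each explanation with the perspective used, so the reader "
        "is aware of the framing (bias awareness).\n\n"
        f"Article:\n{article}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (hosted API or local model).
    return "[model output would appear here]"

if __name__ == "__main__":
    article = "Example article text goes here."
    # User choice / personalization: the reader selects their own stance.
    print(call_llm(build_prompt(article, user_stance="left", counter_stance="right")))
```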
Similar Papers
Neutralizing the Narrative: AI-Powered Debiasing of Online News Articles
Computation and Language
AI finds and fixes biased news stories.
Measuring Political Preferences in AI Systems: An Integrative Approach
Computers and Society
AI talks more like Democrats than Republicans.
Bridging Human and Model Perspectives: A Comparative Analysis of Political Bias Detection in News Media Using Large Language Models
Computation and Language
Helps computers spot political bias in news the way people do.