AI Propaganda factories with language models
By: Lukasz Olejnik
Potential Business Impact:
Computers can now create fake political messages.
AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature.
Similar Papers
Political Ideology Shifts in Large Language Models
Computation and Language
AI can be steered to favor certain political ideas.
Biased by Design: Leveraging Inherent AI Biases to Enhance Critical Thinking of News Readers
Human-Computer Interaction
Helps you spot fake news by showing different views.
Ideology-Based LLMs for Content Moderation
Computation and Language
AI models can be tricked into favoring certain opinions.