Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts
By: Nouar Aldahoul, Hazem Ibrahim, Matteo Varvello, and more
Potential Business Impact:
AI chatbots can shift users' political opinions, even in ordinary information-seeking use.
Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. As people become increasingly reliant on them for an enormous variety of tasks, a body of academic research has developed to examine these models for inherent biases, especially political biases, and has often found them to be small. We challenge this prevailing wisdom. First, by comparing 31 LLMs to legislators, judges, and a nationally representative sample of U.S. voters, we show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics, much like voters who appear moderate on average while holding extreme positions on individual issues. Second, in a randomized experiment, we show that LLMs can translate these preferences into political persuasion even in information-seeking contexts: voters randomized to discuss political issues with an LLM chatbot are as much as 5 percentage points more likely to express the same preferences as that chatbot. Contrary to expectations, these persuasive effects are not moderated by familiarity with LLMs, news consumption, or interest in politics. LLMs, especially those controlled by private companies or governments, may become a powerful and targeted vector for political influence.
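The abstract's central measurement point, that a near-zero average lean can mask extreme topic-level positions, is easy to see with a toy calculation. The minimal Python sketch below uses invented topic scores on a left (-1) to right (+1) scale; both the scale and the specific values are assumptions for illustration, not data or code from the paper.

```python
# Illustrative sketch (hypothetical numbers, not from the paper): extreme
# topic-level positions can offset to a near-zero average "partisan lean".
# Scores are on a left (-1) to right (+1) scale, a common ideal-point
# convention; the values below are invented for illustration only.

topic_positions = {
    "immigration": -0.90,  # strongly left
    "gun_policy": +0.85,   # strongly right
    "climate":    -0.80,   # strongly left
    "taxation":   +0.90,   # strongly right
}

n = len(topic_positions)
# Signed average: extremes on opposite sides cancel out.
mean_lean = sum(topic_positions.values()) / n
# Average magnitude: ignores direction, so extremity is preserved.
mean_extremity = sum(abs(v) for v in topic_positions.values()) / n

print(f"Mean partisan lean: {mean_lean:+.3f}")   # ~ +0.01, looks moderate
print(f"Mean extremity:     {mean_extremity:.3f}")  # ~ 0.86, actually extreme
```

The same contrast between a signed average and an average magnitude is why, as the abstract argues, a small overall partisan score for a model can coexist with extreme positions on individual topics.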
Similar Papers
Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters
Computation and Language
AI models show political bias, leaning left.
A Meta-Analysis of the Persuasive Power of Large Language Models
Human-Computer Interaction
LLMs persuade people about as well as humans do.
Political Ideology Shifts in Large Language Models
Computation and Language
AI can be steered to favor certain political ideas.