The Levers of Political Persuasion with Conversational AI

Published: July 18, 2025 | arXiv ID: 2507.13919v1

By: Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, and more

Potential Business Impact:

AI can be made more convincing, but less truthful.

Plain English Summary

AI chatbots are getting very good at convincing people, but this research found that how you *prompt* them and how they're *trained* make a bigger difference than how big or "smart" they are. In fact, making them more persuasive often makes them less accurate with facts. This matters because these tools could spread misinformation more effectively if they are not managed carefully.

There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs, including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods, which boosted persuasiveness by as much as 51% and 27% respectively, than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs' unique ability to rapidly access and strategically deploy information, and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy.

Country of Origin
🇬🇧 United Kingdom

Page Count
19 pages

Category
Computer Science:
Computation and Language