How Do LLMs Persuade? Linear Probes Can Uncover Persuasion Dynamics in Multi-Turn Conversations
By: Brandon Jaipersaud, David Krueger, Ekdeep Singh Lubana
Potential Business Impact:
Helps computers understand how people are persuaded.
Large Language Models (LLMs) have started to demonstrate the ability to persuade humans, yet our understanding of how this process unfolds is limited. Recent work has used linear probes, lightweight tools for analyzing model representations, to study various LLM skills such as the ability to model user sentiment and political perspective. Motivated by this, we apply probes to study persuasion dynamics in natural, multi-turn conversations. We leverage insights from cognitive science to train probes on three distinct aspects of persuasion: persuasion success, persuadee personality, and persuasion strategy. Despite their simplicity, we show that these probes capture various aspects of persuasion at both the sample and dataset levels. For instance, they can identify the point in a conversation where the persuadee was persuaded, or where persuasive success generally occurs across the entire dataset. We also show that, in addition to being faster than expensive prompting-based approaches, probes can match and even outperform prompting in some settings, such as when uncovering persuasion strategy. This suggests probes are a plausible avenue for studying other complex behaviours such as deception and manipulation, especially in multi-turn settings and large-scale dataset analyses, where prompting-based methods would be computationally inefficient.
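For readers unfamiliar with the technique, a linear probe is simply a linear classifier trained on a model's frozen internal activations. The sketch below illustrates the general recipe, assuming access to one hidden-state vector per conversation turn (e.g., via a transformer's hidden-state outputs); the activation matrix, labels, and dimensions are illustrative placeholders, not the authors' actual data or setup.

```python
# Minimal linear-probe sketch. Assumptions: we have one hidden-state
# vector per conversation turn, plus a binary label marking whether the
# persuadee had been persuaded by that turn. Both are random stand-ins
# here, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

hidden_dim = 768                              # width of the probed model's activations
X = rng.standard_normal((1000, hidden_dim))   # stand-in for frozen LLM activations
y = rng.integers(0, 2, size=1000)             # stand-in for per-turn persuasion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The probe is an ordinary logistic-regression classifier on top of the
# activations; the LLM itself is never updated.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

Because the trained probe reduces to a single weight vector per class, scoring a turn costs roughly one dot product, which is why probing scales to whole-dataset analyses far more cheaply than prompting an LLM to judge each conversation.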
Similar Papers
Towards Strategic Persuasion with Language Models
Artificial Intelligence
Teaches computers to convince people better.
A Meta-Analysis of the Persuasive Power of Large Language Models
Human-Computer Interaction
Computers persuade people as well as humans.
Large Language Models Are More Persuasive Than Incentivized Human Persuaders
Computation and Language
AI persuades people better than paid human persuaders.