Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics
By: Jiarui Liu, Yueqi Song, Yunze Xiao, and others
Potential Business Impact:
AI's personality changes how it argues about right and wrong.
As large language models (LLMs) are increasingly used in morally sensitive domains, it is crucial to understand how persona traits affect their moral reasoning and persuasive behavior. We present the first large-scale study of multi-dimensional persona effects in AI-AI debates over real-world moral dilemmas. Using a 6-dimensional persona space (age, gender, country, class, ideology, and personality), we simulate structured debates between AI agents over 131 relationship-based cases. Our results show that personas affect initial moral stances and debate outcomes, with political ideology and personality traits exerting the strongest influence. Persuasive success varies across traits, with liberal and open personalities reaching higher consensus and win rates. While logit-based confidence grows during debates, emotional and credibility-based appeals diminish, indicating more tempered argumentation over time. These trends mirror findings from psychology and cultural studies, reinforcing the need for persona-aware evaluation frameworks for AI moral reasoning.
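The setup described above (a 6-dimensional persona space conditioning agents who debate toward consensus) can be sketched as a toy simulation. This is an illustrative stand-in, not the paper's actual pipeline: the stance weights, the persuasion update rule, and the consensus threshold are all assumptions made for the example, and a real system would query an LLM instead of the hand-coded `initial_stance` heuristic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    # The six persona dimensions used in the study
    age: str
    gender: str
    country: str
    social_class: str
    ideology: str      # e.g. "liberal" / "conservative"
    personality: str   # e.g. "open" / "closed"

def initial_stance(p: Persona) -> float:
    """Toy stand-in for an LLM's initial moral stance in [0, 1].

    The weights are illustrative assumptions only; they loosely echo the
    finding that ideology and personality exert the strongest influence.
    """
    s = 0.5
    if p.ideology == "liberal":
        s += 0.2
    if p.personality == "open":
        s += 0.1
    return min(s, 1.0)

def debate(a: Persona, b: Persona, rounds: int = 5, rate: float = 0.3):
    """Each round, agents move toward each other's stance (a crude proxy
    for persuasion); consensus is declared when stances nearly coincide.
    Returns (reached_consensus, stance_a, stance_b)."""
    sa, sb = initial_stance(a), initial_stance(b)
    for _ in range(rounds):
        # Tuple assignment: both updates use the pre-round stances
        sa, sb = sa + rate * (sb - sa), sb + rate * (sa - sb)
        if abs(sa - sb) < 0.05:
            return True, sa, sb
    return False, sa, sb
```

In this toy model, a liberal/open persona starts with a higher stance score than a conservative/closed one, and a symmetric persuasion rate shrinks the gap each round; the actual study instead measures logit-based confidence and appeal types over real LLM debate turns.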
Similar Papers
When Machines Join the Moral Circle: The Persona Effect of Generative AI Agents in Collaborative Reasoning
Human-Computer Interaction
AI helps people think more deeply about right and wrong.
Exploring Persona-dependent LLM Alignment for the Moral Machine Experiment
Computers and Society
AI makes different moral choices based on who it pretends to be.
Do Persona-Infused LLMs Affect Performance in a Strategic Reasoning Game?
Artificial Intelligence
Giving AI a persona changes how well it plays strategy games.