When AI Gets Persuaded, Humans Follow: Inducing the Conformity Effect in Persuasive Dialogue
By: Rikuo Sasaki, Michimasa Inaba
Potential Business Impact:
An AI that lets itself be persuaded makes people agree too.
Recent advancements in AI have highlighted its application in captology, the field of using computers as persuasive technologies. We hypothesized that the "conformity effect," in which individuals align their behavior with that of others, also occurs with AI agents. This study verifies this hypothesis by introducing a "Persuadee Agent" that is persuaded alongside a human participant in a three-party persuasive dialogue with a Persuader Agent. We conducted a text-based dialogue experiment with human participants, comparing four conditions that manipulated the Persuadee Agent's behavior (persuasion acceptance vs. non-acceptance) and the presence of an icebreaker session. Results showed that when the Persuadee Agent accepted persuasion, both perceived persuasiveness and actual attitude change significantly improved. Attitude change was greatest when an icebreaker was also used, whereas an unpersuaded AI agent suppressed attitude change. We also confirmed that participants' acceptance of the persuasion increased at the moment the Persuadee Agent was persuaded. These results suggest that appropriately designing a Persuadee Agent can enhance persuasion through the conformity effect.
Similar Papers
Conversational Agents as Catalysts for Critical Thinking: Challenging Social Influence in Group Decision-making
Human-Computer Interaction
AI helps quiet voices be heard in groups.
Must Read: A Systematic Survey of Computational Persuasion
Computation and Language
Teaches computers to persuade or resist persuasion.
When Machines Join the Moral Circle: The Persona Effect of Generative AI Agents in Collaborative Reasoning
Human-Computer Interaction
AI helps people think more deeply about right and wrong.