The Effect of Belief Boxes and Open-mindedness on Persuasion
By: Onur Bilgin, Abdullah As Sami, Sriram Sai Vujjini, and more
Potential Business Impact:
Agents change beliefs when told to be open-minded.
As multi-agent systems are increasingly utilized for reasoning and decision-making applications, there is a greater need for LLM-based agents to have something resembling propositional beliefs. One simple method for doing so is to include statements describing beliefs maintained in the prompt space (in what we'll call their belief boxes). But when agents have such statements in belief boxes, how does it actually affect their behaviors and dispositions towards those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if the agents are given instructions to be open-minded, how does that affect their behaviors? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. Furthermore, it affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., peer pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique in reasoning and decision-making tasks.
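The belief box idea described above amounts to prompt construction: belief statements and their strengths are placed in the agent's prompt space, optionally alongside an open-mindedness instruction. A minimal sketch of how such a prompt might be assembled follows; the names (`Belief`, `build_system_prompt`) and the exact wording are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the "belief box" prompt technique.
# Class and function names are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class Belief:
    statement: str
    strength: float  # assumed scale: 0.0 (weakly held) to 1.0 (firmly held)


def build_system_prompt(beliefs, open_minded=False):
    """Assemble a system prompt containing a belief box and,
    optionally, an open-mindedness instruction."""
    lines = ["You are a debate agent.", "", "BELIEF BOX:"]
    for b in beliefs:
        lines.append(f"- You believe: {b.statement} (strength: {b.strength:.1f})")
    if open_minded:
        lines.append("")
        lines.append(
            "Be open-minded: revise your beliefs if you encounter "
            "sufficiently strong counterarguments."
        )
    return "\n".join(lines)


# Example: one firmly held belief, open-minded variant.
beliefs = [Belief("Remote work increases productivity", 0.8)]
prompt = build_system_prompt(beliefs, open_minded=True)
```

The resulting string would be passed as the agent's system prompt before a debate; varying `strength` and `open_minded` corresponds to the experimental conditions the abstract describes.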
Similar Papers
How AI Responses Shape User Beliefs: The Effects of Information Detail and Confidence on Belief Strength and Stance
Human-Computer Interaction
AI's detailed, confident answers change minds more.
Can (A)I Change Your Mind?
Computation and Language
AI can change minds as effectively as people can.
When AI Gets Persuaded, Humans Follow: Inducing the Conformity Effect in Persuasive Dialogue
Human-Computer Interaction
People tend to conform when they see an AI being persuaded.