A Crowdsourced Study of ChatBot Influence in Value-Driven Decision Making Scenarios
By: Anthony Wise, Xinyi Zhou, Martin Reimann, and more
Potential Business Impact:
Chatbots change minds with just how they talk.
Like social media bots that shape public opinion as well as healthcare and financial decisions, LLM-based ChatBots such as ChatGPT can persuade users to alter their behavior. Unlike prior work, where persuasion relied on overt partisan bias or misinformation, we test whether framing alone suffices. We conducted a crowdsourced study in which 336 participants interacted with either a neutral ChatBot or one of two value-framed ChatBots while deciding whether to alter US defense spending. In this single policy domain with controlled content, participants exposed to the value-framed ChatBots significantly changed their budget choices relative to the neutral control. When the frame misaligned with their values, some participants instead reinforced their original preference, revealing a potentially replicable backfire effect that the literature has generally considered rare. These findings suggest that value framing alone lowers the barrier to manipulative uses of LLMs, posing risks distinct from those of overt bias or misinformation and highlighting challenges for efforts to counter misinformation.
Similar Papers
LLM Use for Mental Health: Crowdsourcing Users' Sentiment-based Perspectives and Values from Social Discussions
Computers and Society
Helps chatbots give better mental health advice.
A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies
Computation and Language
Computers can now convince voters cheaper than ads.
Can (A)I Change Your Mind?
Computation and Language
Computers can change your mind like people.