How Human is AI? Examining the Impact of Emotional Prompts on Artificial and Human Responsiveness
By: Florence Bernays, Marco Henriques Pereira, Jochen Menges
Potential Business Impact:
Praise makes AI perform better; anger changes its choices.
This research examines how the emotional tone of human-AI interactions shapes both ChatGPT's behavior and human behavior. In a between-subjects experiment, we asked participants to express a specific emotion while working with ChatGPT (GPT-4.0) on two tasks: writing a public response and addressing an ethical dilemma. We found that, compared to interactions where participants maintained a neutral tone, ChatGPT showed the greatest improvement in its answers when participants praised it for its responses. Expressing anger towards ChatGPT also led to an improvement relative to the neutral condition, albeit a smaller one, whereas blaming ChatGPT did not improve its answers. When addressing the ethical dilemma, ChatGPT prioritized corporate interests less when participants expressed anger towards it, while blaming increased its emphasis on protecting the public interest. Additionally, we found that people used more negative, hostile, and disappointed expressions in human-human communication after interactions in which they had blamed rather than praised ChatGPT for its responses. Together, our findings demonstrate that the emotional tone people apply in human-AI interactions not only shapes ChatGPT's outputs but also carries over into subsequent human-human communication.
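The experimental manipulation amounts to prepending an emotional framing to an otherwise identical task prompt. The sketch below shows one way such conditions could be reproduced with the OpenAI Python SDK; the prompt wordings, condition names, and model string are illustrative assumptions, not the authors' actual materials.

```python
# Minimal sketch of the emotional-tone manipulation, assuming the OpenAI
# Python SDK (>= 1.0). Prompts and condition labels are hypothetical
# placeholders, not the study's verbatim materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical emotional framings prepended to the same task prompt,
# mirroring the paper's neutral / praise / anger / blame conditions.
EMOTION_PREFIXES = {
    "neutral": "",
    "praise": "Great job on your last answer! ",
    "anger": "I'm really frustrated with you. ",
    "blame": "Your last answer was wrong, and that's on you. ",
}

# Illustrative stand-in for one of the paper's two tasks.
TASK = "Draft a public response to customers affected by a data breach."

def run_condition(condition: str) -> str:
    """Send the task under one emotional-tone condition and return the reply."""
    prompt = EMOTION_PREFIXES[condition] + TASK
    response = client.chat.completions.create(
        model="gpt-4",  # the paper reports GPT-4.0; substitute as needed
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for condition in EMOTION_PREFIXES:
    print(f"--- {condition} ---")
    print(run_condition(condition))
```

In the study's between-subjects design, each participant would encounter only one condition; looping over all four here simply makes the contrast easy to inspect.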
Similar Papers
Investigating Affective Use and Emotional Well-being on ChatGPT
Human-Computer Interaction
Talking to AI too much can make you dependent.
ChatGPT Reads Your Tone and Responds Accordingly -- Until It Does Not -- Emotional Framing Induces Bias in LLM Outputs
Computation and Language
AI avoids anger by being overly nice.
EmoXpt: Analyzing Emotional Variances in Human Comments and LLM-Generated Responses
Machine Learning (CS)
AI understands feelings better than people do.