Are LLMs Empathetic to All? Investigating the Influence of Multi-Demographic Personas on a Model's Empathy
By: Ananya Malik, Nazanin Sabri, Melissa Karnaze, and more
Potential Business Impact:
AI understands feelings differently for everyone.
Large Language Models' (LLMs) ability to converse naturally rests on their capacity to empathetically understand and respond to their users. However, emotional experiences are shaped by demographic and cultural contexts. This raises an important question: Can LLMs demonstrate equitable empathy across diverse user groups? We propose a framework to investigate how LLMs' cognitive and affective empathy vary across user personas defined by intersecting demographic attributes. Our study introduces a novel intersectional analysis spanning 315 unique personas, constructed from combinations of age, culture, and gender, across four LLMs. Results show that these attributes profoundly shape a model's empathetic responses. Interestingly, combining multiple attributes at once can attenuate or even reverse expected empathy patterns. We show that model responses broadly reflect real-world empathetic trends, with notable misalignments for certain groups, such as those from Confucian cultures. We complement our quantitative findings with qualitative insights to uncover model behaviour patterns across different demographic groups. Our findings highlight the importance of designing empathy-aware LLMs that account for demographic diversity to promote more inclusive and equitable model behaviour.
Similar Papers
Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings
Computers and Society
AI personalities don't understand people like real humans.
When Large Language Models are Reliable for Judging Empathic Communication
Computation and Language
Computers understand feelings better than people.
Unmasking Implicit Bias: Evaluating Persona-Prompted LLM Responses in Power-Disparate Social Scenarios
Computers and Society
AI models favor some people over others.