From Anger to Joy: How Nationality Personas Shape Emotion Attribution in Large Language Models
By: Mahammed Kamruzzaman, Abdullah Al Monsur, Gene Louis Kim, and more
Potential Business Impact:
Computers attribute biased emotions to different countries.
Emotions are a fundamental facet of human experience, varying across individuals, cultural contexts, and nationalities. Given the recent success of Large Language Models (LLMs) as role-playing agents, we examine whether LLMs exhibit emotional stereotypes when assigned nationality-specific personas. Specifically, we investigate how different countries are represented in pre-trained LLMs through emotion attributions and whether these attributions align with cultural norms. Our analysis reveals significant nationality-based differences, with emotions such as shame, fear, and joy disproportionately assigned across regions. Furthermore, we observe notable misalignment between LLM-generated and human emotional responses, particularly for negative emotions, highlighting the presence of reductive and potentially biased stereotypes in LLM outputs.
Similar Papers
Emergence of Hierarchical Emotion Organization in Large Language Models
Computation and Language
Computers learn to understand feelings like people.
Gender Bias in Emotion Recognition by Large Language Models
Computation and Language
Makes AI understand feelings without gender bias.
Large Language Models are Highly Aligned with Human Ratings of Emotional Stimuli
Artificial Intelligence
AI understands feelings like people do.