Emergence of Hierarchical Emotion Organization in Large Language Models
By: Bo Zhao, Maya Okawa, Eric J. Bigelow, and more
Potential Business Impact:
Computers learn to understand feelings like people do.
As large language models (LLMs) increasingly power conversational agents, understanding how they model users' emotional states is critical for ethical deployment. Inspired by emotion wheels -- a psychological framework that argues emotions organize hierarchically -- we analyze probabilistic dependencies between emotional states in model outputs. We find that LLMs naturally form hierarchical emotion trees that align with human psychological models, and that larger models develop more complex hierarchies. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Human studies reveal striking parallels, suggesting that LLMs internalize aspects of social perception. Beyond highlighting emergent emotional reasoning in LLMs, our results hint at the potential of using cognitively grounded theories to develop better model evaluations.
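To make the abstract's analysis concrete, here is a minimal sketch of how one might recover a hierarchical emotion tree from probabilistic dependencies between emotion labels in model outputs. The emotion list, the random placeholder data, and the choice of correlation-plus-agglomerative-clustering are illustrative assumptions, not the authors' actual method or data.

```python
# Illustrative sketch (not the paper's exact procedure): given an (N x K)
# matrix of model-assigned probabilities over K emotion labels, use pairwise
# correlation as a rough proxy for probabilistic dependency and build a
# hierarchy with agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
emotion_labels = ["joy", "trust", "fear", "surprise", "sadness", "anger"]  # assumed labels
emotion_probs = rng.dirichlet(np.ones(len(emotion_labels)), size=500)      # placeholder data

# Correlation between emotion probabilities across responses, converted
# to a distance, then clustered into a tree.
corr = np.corrcoef(emotion_probs.T)
dist = 1.0 - corr
condensed = dist[np.triu_indices(len(emotion_labels), k=1)]
tree = linkage(condensed, method="average")

# Emotions that co-vary strongly merge early (low in the tree); weakly
# coupled emotions only join near the root.
print(dendrogram(tree, labels=emotion_labels, no_plot=True)["ivl"])
```

With real model outputs in place of the placeholder matrix, the resulting tree can then be compared against psychological emotion wheels, which is the kind of alignment the paper reports.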
Similar Papers
Large Language Models are Highly Aligned with Human Ratings of Emotional Stimuli
Artificial Intelligence
AI understands feelings like people do.
AI with Emotions: Exploring Emotional Expressions in Large Language Models
Artificial Intelligence
Computers can now show feelings when they talk.
When Large Language Models are Reliable for Judging Empathic Communication
Computation and Language
Computers understand feelings better than people.