From Emotion Classification to Emotional Reasoning: Enhancing Emotional Intelligence in Large Language Models
By: Arjhun Sreedar, Rohan Pillay, Laukik Patade
Potential Business Impact:
Teaches AI to understand feelings better.
This work investigates whether synthetic emotional chain-of-thought data can improve the emotional reasoning abilities of smaller open large language models (LLMs). We design a multi-agent generation pipeline that produces therapy-style conversations and converts them into structured emotion multiple-choice questions (MCQs) with explanations. We hypothesize that fine-tuning a variety of 7B models on this dataset yields substantial gains in emotional understanding (EU) and emotional awareness (EA) on EmoBench-style evaluations, suggesting that emotional reasoning can be induced without architectural changes. Our results show that fine-tuned Mistral 7B improves its EU score from 10.5 to 20.5 and its EA score from 40.5 to 60.0, supporting synthetic emotional reasoning data as an effective way to enhance model capabilities on nuanced emotional tasks.
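To make the data format concrete, the sketch below shows one plausible shape for the structured emotion MCQs with explanations that the pipeline produces, and how such a record could be flattened into a prompt/target pair for supervised fine-tuning. The field names, prompt wording, and helper function are illustrative assumptions, not the authors' actual schema or code.

```python
from dataclasses import dataclass
from typing import List


# Hypothetical record for a synthetic emotion MCQ derived from a
# therapy-style conversation; field names are assumptions for illustration.
@dataclass
class EmotionMCQ:
    dialogue: str        # therapy-style conversation transcript
    question: str        # e.g. "Which emotion best describes the client's state?"
    options: List[str]   # candidate emotion labels
    answer_index: int    # index of the correct option
    explanation: str     # chain-of-thought style rationale for the answer


def to_training_example(item: EmotionMCQ) -> dict:
    """Flatten an MCQ record into a prompt/target pair for supervised fine-tuning."""
    option_lines = "\n".join(
        f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(item.options)
    )
    prompt = (
        f"Conversation:\n{item.dialogue}\n\n"
        f"Question: {item.question}\n{option_lines}\n"
        "Answer with the correct letter and explain your reasoning."
    )
    target = (
        f"{chr(ord('A') + item.answer_index)}. {item.options[item.answer_index]}\n"
        f"Reasoning: {item.explanation}"
    )
    return {"prompt": prompt, "target": target}


if __name__ == "__main__":
    example = EmotionMCQ(
        dialogue="Client: I keep replaying the argument with my sister...",
        question="Which emotion is the client most likely experiencing?",
        options=["Relief", "Guilt", "Boredom", "Pride"],
        answer_index=1,
        explanation="Rumination over a conflict with a close family member "
                    "points to guilt rather than relief, boredom, or pride.",
    )
    print(to_training_example(example)["prompt"])
```

Pairing each answer with its rationale in the target, as sketched here, is one natural way to encourage a fine-tuned 7B model to produce emotional reasoning rather than bare labels.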
Similar Papers
Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models
Artificial Intelligence
Helps AI understand *why* people feel emotions.
Beyond Context to Cognitive Appraisal: Emotion Reasoning as a Theory of Mind Benchmark for Large Language Models
Computation and Language
Helps computers understand feelings from hidden clues.
EICAP: Deep Dive in Assessment and Enhancement of Large Language Models in Emotional Intelligence through Multi-Turn Conversations
Computation and Language
Teaches computers to understand and respond to feelings.