Can Large Language Models Generate Effective Datasets for Emotion Recognition in Conversations?
By: Burak Can Kaplan, Hugo Cesar De Castro Carneiro, Stefan Wermter
Potential Business Impact:
Helps computers understand emotions in conversations.
Emotion recognition in conversations (ERC) focuses on identifying emotion shifts within interactions, a significant step toward advancing machine intelligence. However, ERC data remain scarce, and existing datasets face numerous challenges due to their highly biased sources and the inherent subjectivity of soft labels. Although Large Language Models (LLMs) have demonstrated strong performance on many affective tasks, they are typically expensive to train, and their application to ERC tasks, particularly data generation, remains limited. To address these challenges, we employ a small, resource-efficient, general-purpose LLM to synthesize ERC datasets with diverse properties, supplementing the three most widely used ERC benchmarks. We generate six novel datasets, two tailored to enhance each benchmark. We evaluate the utility of these datasets for (1) supplementing existing datasets for ERC classification and (2) analyzing the effects of label imbalance in ERC. Our experimental results indicate that ERC classifier models trained on the generated datasets exhibit strong robustness and consistently achieve statistically significant performance improvements on existing ERC benchmarks.
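The data-generation idea described in the abstract, prompting a small general-purpose LLM to synthesize labeled conversations that offset label imbalance, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the `call_llm` callable, the emotion set, the 10% rarity threshold, and the prompt wording are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical sketch: ask an LLM for short dialogues targeting emotions that
# are underrepresented in an existing ERC benchmark. `call_llm` is a stand-in
# for any local or hosted model; all names and thresholds are illustrative.

EMOTIONS = ["neutral", "joy", "sadness", "anger", "surprise", "fear", "disgust"]

def underrepresented(labels, threshold=0.10):
    """Return emotions whose share of the label list falls below `threshold`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [e for e in EMOTIONS if counts.get(e, 0) / total < threshold]

def build_prompt(emotion, n_turns=4):
    """Prompt template requesting a short dialogue ending in `emotion`."""
    return (
        f"Write a {n_turns}-turn dialogue between two speakers in which the "
        f"final utterance clearly expresses the emotion '{emotion}'. "
        "Label every utterance with its speaker and emotion."
    )

def synthesize(labels, call_llm, per_emotion=2):
    """Generate synthetic samples only for the rare emotion labels."""
    return {
        e: [call_llm(build_prompt(e)) for _ in range(per_emotion)]
        for e in underrepresented(labels)
    }

# Usage with a stub in place of a real model:
existing_labels = ["neutral"] * 80 + ["joy"] * 15 + ["fear"] * 5
generated = synthesize(existing_labels,
                       call_llm=lambda p: f"<dialogue for: {p[:40]}...>")
```

In this sketch, only emotions below the rarity threshold trigger generation, so the synthetic data supplements rather than duplicates the well-covered classes.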
Similar Papers
Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning
Artificial Intelligence
Helps computers understand feelings in conversations.
When Large Language Models are Reliable for Judging Empathic Communication
Computation and Language
Tests when computers can reliably judge empathy in communication.
In-Context Examples Matter: Improving Emotion Recognition in Conversation with Instruction Tuning
Computation and Language
Helps computers understand emotions in conversations.