Emotion Recognition in Multi-Speaker Conversations through Speaker Identification, Knowledge Distillation, and Hierarchical Fusion

Published: November 5, 2025 | arXiv ID: 2511.13731v1

By: Xiao Li, Kotaro Funakoshi, Manabu Okumura

Potential Business Impact:

Helps software recognize each speaker's emotions in group conversations.

Business Areas:
Speech Recognition, Data and Analytics, Software

Emotion recognition in multi-speaker conversations faces significant challenges due to speaker ambiguity and severe class imbalance. We propose a novel framework that addresses these issues through three key innovations: (1) a speaker identification module that leverages audio-visual synchronization to accurately identify the active speaker, (2) a knowledge distillation strategy that transfers superior textual emotion understanding to the audio and visual modalities, and (3) hierarchical attention fusion with composite loss functions to handle class imbalance. Comprehensive evaluations on the MELD and IEMOCAP datasets demonstrate superior performance, achieving 67.75% and 72.44% weighted F1 scores, respectively, with particularly notable improvements on minority emotion classes.
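The abstract does not spell out the exact loss formulations, but the distillation and class-imbalance components can be sketched with standard ingredients. Below is a minimal PyTorch sketch assuming a Hinton-style soft-label distillation loss (a frozen text teacher guiding an audio/visual student) combined with a focal loss, a common ingredient in composite losses for imbalanced emotion classes. All function names, weights, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label knowledge distillation (Hinton et al.): a hypothetical
    stand-in for the paper's text-to-audio/visual transfer objective."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions, scaled by T^2 so
    # gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

def focal_loss(logits, labels, gamma=2.0, class_weights=None):
    """Focal loss: down-weights easy (majority-class) examples so training
    emphasizes minority emotions. The paper's exact composite loss is not
    specified in the abstract."""
    ce = F.cross_entropy(logits, labels, weight=class_weights, reduction="none")
    pt = torch.exp(-ce)  # model's probability for the true class
    return ((1.0 - pt) ** gamma * ce).mean()

# Usage example: a text teacher guides an audio student while focal loss
# emphasizes minority emotion classes.
batch, num_classes = 8, 7  # MELD annotates 7 emotion classes
teacher_logits = torch.randn(batch, num_classes)  # frozen text-model output
student_logits = torch.randn(batch, num_classes, requires_grad=True)
labels = torch.randint(0, num_classes, (batch,))

# 0.5 is an assumed blending weight between the two loss terms.
loss = focal_loss(student_logits, labels) + 0.5 * distillation_loss(
    student_logits, teacher_logits
)
loss.backward()
```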

Page Count
14 pages

Category
Computer Science: Sound