Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition

Published: June 10, 2025 | arXiv ID: 2506.08717v1

By: Mehedi Hasan Bijoy, Dejan Porjazovski, Tamás Grósz and more

Potential Business Impact:

Enables computers to recognize emotions in spoken language across multiple languages.

Business Areas:
Speech Recognition, Data and Analytics, Software

Speech Emotion Recognition (SER) is crucial for improving human-computer interaction. Despite strides in monolingual SER, extending these advances to a multilingual system remains challenging. Our goal is to train a single model capable of multilingual SER by distilling knowledge from multiple teacher models. To this end, we introduce a novel language-aware multi-teacher knowledge distillation method to advance SER in English, Finnish, and French. It uses Wav2Vec2.0 as the foundation of the monolingual teacher models and then distills their knowledge into a single multilingual student model. The student model achieves state-of-the-art performance, with a weighted recall of 72.9 on the English dataset and an unweighted recall of 63.4 on the Finnish dataset, surpassing fine-tuning and knowledge distillation baselines. Our method excels at improving recall for sad and neutral emotions, although it still faces challenges in recognizing anger and happiness.
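The core idea of language-aware multi-teacher distillation can be illustrated with a small sketch: each utterance's distillation target comes from the teacher matching its language, using the standard temperature-scaled KL distillation loss. This is a hypothetical simplification (the function names, the toy logits, and the routing-by-language-label scheme are assumptions for illustration), not the paper's actual training code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with temperature scaling, averaged over the batch."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

def language_aware_kd_loss(student_logits, teacher_logits_by_lang, langs, T=2.0):
    """Route each utterance to the teacher of its language, then average losses."""
    losses = [
        kd_loss(s[None, :], teacher_logits_by_lang[lang][None, :], T)
        for s, lang in zip(student_logits, langs)
    ]
    return float(np.mean(losses))

# Toy batch: two utterances (4 emotion classes), one English, one Finnish.
student = np.array([[2.0, 0.5, -1.0, 0.1],
                    [0.2, 1.5,  0.3, -0.5]])
teacher_logits = {"en": np.array([2.2, 0.4, -0.8, 0.0]),
                  "fi": np.array([0.1, 1.8,  0.2, -0.4])}
langs = ["en", "fi"]

loss = language_aware_kd_loss(student, teacher_logits, langs)
```

In practice this distillation term would be combined with a supervised cross-entropy loss on the emotion labels; the sketch only shows the language-aware teacher routing.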

Country of Origin
🇫🇮 Finland

Page Count
5 pages

Category
Computer Science:
Computation and Language