Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition
By: Mehedi Hasan Bijoy, Dejan Porjazovski, Tamás Grósz, and more
Potential Business Impact:
Lets computers understand emotions in many languages.
Speech Emotion Recognition (SER) is crucial for improving human-computer interaction. Despite strides in monolingual SER, extending these advances to a multilingual system remains challenging. Our goal is to train a single model capable of multilingual SER by distilling knowledge from multiple teacher models. To address this, we introduce a novel language-aware multi-teacher knowledge distillation method to advance SER in English, Finnish, and French. It uses Wav2Vec2.0 as the foundation of the monolingual teacher models and then distills their knowledge into a single multilingual student model. The student model demonstrates state-of-the-art performance, with a weighted recall of 72.9 on the English dataset and an unweighted recall of 63.4 on the Finnish dataset, surpassing fine-tuning and knowledge distillation baselines. Our method excels at improving recall for sad and neutral emotions, although it still faces challenges in recognizing anger and happiness.
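To make the idea of language-aware multi-teacher distillation concrete, the sketch below shows one common way such a loss can be wired up: each utterance is routed to the frozen monolingual teacher that matches its language, and the multilingual student is trained on a mix of the teacher's temperature-scaled soft targets and the gold emotion labels. This is a minimal illustration, not the paper's implementation; the function name `language_aware_kd_loss`, the temperature, the mixing weight `alpha`, and the per-example routing scheme are all assumptions for demonstration.

```python
# Minimal sketch of a language-aware multi-teacher KD loss (illustrative only;
# the paper's exact objective and routing may differ).
import torch
import torch.nn.functional as F

def language_aware_kd_loss(student_logits, teacher_logits_by_lang, lang_ids,
                           hard_labels, temperature=2.0, alpha=0.5):
    """Distill from the teacher that matches each utterance's language.

    student_logits:         (batch, num_emotions) from the multilingual student
    teacher_logits_by_lang: dict lang_id -> (batch, num_emotions) logits from the
                            frozen monolingual Wav2Vec2.0-based teachers
    lang_ids:               (batch,) language id of each utterance
    hard_labels:            (batch,) gold emotion labels
    """
    # Select, per example, the logits of the teacher trained on that language.
    teacher_logits = torch.stack(
        [teacher_logits_by_lang[int(l)][i] for i, l in enumerate(lang_ids)]
    )

    # Soft-target loss: KL divergence between temperature-scaled distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-target loss on the gold emotion labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss


# Usage with dummy tensors (3 emotions; languages 0 and 1, e.g. English/Finnish).
if __name__ == "__main__":
    batch, classes = 4, 3
    student = torch.randn(batch, classes, requires_grad=True)
    teachers = {0: torch.randn(batch, classes), 1: torch.randn(batch, classes)}
    langs = torch.tensor([0, 1, 0, 1])
    labels = torch.tensor([2, 0, 1, 1])
    loss = language_aware_kd_loss(student, teachers, langs, labels)
    loss.backward()
    print(float(loss))
```

In practice the teacher logits would come from monolingual Wav2Vec2.0 models fine-tuned per language and kept frozen, while only the student is updated; the routing by language id is what makes the distillation "language-aware" in this sketch.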
Similar Papers
M4SER: Multimodal, Multirepresentation, Multitask, and Multistrategy Learning for Speech Emotion Recognition
Human-Computer Interaction
Helps computers understand feelings from voices better.
MERaLiON-SER: Robust Speech Emotion Recognition Model for English and SEA Languages
Sound
Lets computers understand emotions in voices.