Emotion Detection in Speech Using Lightweight and Transformer-Based Models: A Comparative and Ablation Study
By: Lucky Onyekwelu-Udoka, Md Shafiqul Islam, Md Shahedul Hasan
Potential Business Impact:
Lets computers understand your feelings from your voice.
Emotion recognition from speech plays a vital role in the development of empathetic human-computer interaction systems. This paper presents a comparative analysis of lightweight transformer-based models, DistilHuBERT and PaSST, on the task of classifying six core emotions from the CREMA-D dataset. We benchmark their performance against a traditional CNN-LSTM baseline model trained on MFCC features. DistilHuBERT achieves superior accuracy (70.64%) and F1 score (70.36%) while maintaining an exceptionally small model size (0.02 MB), outperforming both PaSST and the baseline. Furthermore, we conduct an ablation study on three PaSST variants, with Linear, MLP, and Attentive Pooling classification heads, to understand the effect of classification-head architecture on model performance. Our results indicate that PaSST with an MLP head yields the best performance among its variants but still falls short of DistilHuBERT. Among the emotion classes, angry is consistently the most accurately detected, while disgust remains the most challenging. These findings suggest that lightweight transformers like DistilHuBERT offer a compelling solution for real-time speech emotion recognition on edge devices. The code is available at: https://github.com/luckymaduabuchi/Emotion-detection-.
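To make the ablation concrete: the head variants differ in how per-frame transformer embeddings are collapsed into one utterance-level vector before classification. The sketch below is a hypothetical pure-Python illustration (not the authors' implementation, which uses the actual PaSST/DistilHuBERT models): a Linear head mean-pools frames and applies one affine map, while an Attentive Pooling head learns a scoring vector, softmaxes the per-frame scores over time, and takes the weighted sum. All weights and dimensions here are made up for clarity.

```python
import math

def mean_pool(frames):
    # frames: list of T frame embeddings, each a list of D floats
    T, D = len(frames), len(frames[0])
    return [sum(f[d] for f in frames) / T for d in range(D)]

def linear(x, W, b):
    # one affine map: the "Linear head" applied after pooling
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attentive_pool(frames, w_att):
    # "Attentive Pooling": score each frame with a learned vector w_att,
    # normalize the scores over time, then take the weighted sum of frames
    scores = [sum(wi * xi for wi, xi in zip(w_att, f)) for f in frames]
    alphas = softmax(scores)
    D = len(frames[0])
    return [sum(a * f[d] for a, f in zip(alphas, frames)) for d in range(D)]
```

With a zero scoring vector the attention weights are uniform, so attentive pooling reduces exactly to mean pooling; a trained scoring vector instead lets the head emphasize the frames most indicative of emotion, which is the property the ablation probes. An MLP head (not shown) simply replaces the single affine map with two affine maps and a nonlinearity.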
Similar Papers
Transformer Redesign for Late Fusion of Audio-Text Features on Ultra-Low-Power Edge Hardware
Sound
Helps tiny computers understand feelings from voices.
Emotion Recognition in Multi-Speaker Conversations through Speaker Identification, Knowledge Distillation, and Hierarchical Fusion
Sound
Helps computers understand emotions in group talks.
DeepEmoNet: Building Machine Learning Models for Automatic Emotion Recognition in Human Speeches
Audio and Speech Processing
Helps computers understand feelings in voices.