Score: 1

Emotion Detection in Speech Using Lightweight and Transformer-Based Models: A Comparative and Ablation Study

Published: November 1, 2025 | arXiv ID: 2511.00402v1

By: Lucky Onyekwelu-Udoka, Md Shafiqul Islam, Md Shahedul Hasan

Potential Business Impact:

Lets computers recognize a speaker's emotions from their voice.

Business Areas:
Speech Recognition, Data and Analytics, Software

Emotion recognition from speech plays a vital role in the development of empathetic human-computer interaction systems. This paper presents a comparative analysis of lightweight transformer-based models, DistilHuBERT and PaSST, on the task of classifying six core emotions from the CREMA-D dataset. We benchmark their performance against a traditional CNN-LSTM baseline model using MFCC features. DistilHuBERT demonstrates superior accuracy (70.64%) and F1 score (70.36%) while maintaining an exceptionally small model size (0.02 MB), outperforming both PaSST and the baseline. Furthermore, we conducted an ablation study on three PaSST variants with Linear, MLP, and Attentive Pooling classification heads to understand the effect of classification head architecture on model performance. Our results indicate that PaSST with an MLP head yields the best performance among its variants but still falls short of DistilHuBERT. Among the emotion classes, angry is consistently the most accurately detected, while disgust remains the most challenging. These findings suggest that lightweight transformers like DistilHuBERT offer a compelling solution for real-time speech emotion recognition on edge devices. The code is available at: https://github.com/luckymaduabuchi/Emotion-detection-.
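As a rough illustration of the pipeline the abstract describes, the sketch below pairs a pretrained DistilHuBERT encoder (loaded via Hugging Face transformers) with the three kinds of classification heads compared in the ablation: Linear, MLP, and Attentive Pooling. The checkpoint name, head sizes, and pooling details are assumptions for illustration only, not the authors' implementation; see the linked repository for that.

```python
# Minimal sketch (assumed setup, not the paper's code): a DistilHuBERT encoder
# feeding a small classification head over six CREMA-D emotion classes.
import torch
import torch.nn as nn
from transformers import AutoModel


class EmotionClassifier(nn.Module):
    def __init__(self, encoder_name="ntu-spml/distilhubert", num_classes=6, head="mlp"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size  # 768 for DistilHuBERT
        self.head_type = head
        if head == "linear":
            self.head = nn.Linear(hidden, num_classes)
        elif head == "mlp":
            # Hidden width and dropout are illustrative choices.
            self.head = nn.Sequential(
                nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.2),
                nn.Linear(256, num_classes),
            )
        else:  # attentive pooling: learn per-frame weights before classifying
            self.attn = nn.Linear(hidden, 1)
            self.head = nn.Linear(hidden, num_classes)

    def forward(self, waveform):
        # waveform: (batch, samples) of 16 kHz mono audio
        frames = self.encoder(waveform).last_hidden_state  # (batch, time, hidden)
        if self.head_type in ("linear", "mlp"):
            pooled = frames.mean(dim=1)                     # mean-pool over time
        else:
            weights = torch.softmax(self.attn(frames), dim=1)
            pooled = (weights * frames).sum(dim=1)          # attentive pooling
        return self.head(pooled)                            # logits over 6 emotions


# Example: classify a 3-second dummy clip
model = EmotionClassifier(head="mlp").eval()
with torch.no_grad():
    logits = model(torch.randn(1, 48000))
print(logits.shape)  # torch.Size([1, 6])
```

In practice the raw audio would be normalized with the matching feature extractor and the encoder fine-tuned or frozen depending on the experiment; the sketch only shows how the classification-head variants from the ablation plug into the same backbone.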

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/luckymaduabuchi/Emotion-detection-

Page Count
6 pages

Category
Computer Science: Sound