On the Contribution of Lexical Features to Speech Emotion Recognition
By: David Combei
Potential Business Impact:
Lets computers understand feelings from spoken words.
Although paralinguistic cues are often considered the primary drivers of speech emotion recognition (SER), we investigate the role of lexical content extracted from speech and show that it can achieve competitive, and in some cases higher, performance than acoustic models. On the MELD dataset, our lexical-based approach obtains a weighted F1-score (WF1) of 51.5%, compared to 49.3% for an acoustic-only pipeline with a larger parameter count. Furthermore, we analyze different self-supervised learning (SSL) speech and text representations, conduct a layer-wise study of transformer-based encoders, and evaluate the effect of audio denoising.
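To make the layer-wise study concrete, here is a minimal sketch of a lexical SER probe under stated assumptions: utterance transcripts are embedded with a pretrained text encoder, each hidden layer is mean-pooled and fed to a linear classifier, and the result is scored with weighted F1 (WF1). The encoder choice ("roberta-base"), the pooling, the classifier, and the toy utterances are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical layer-wise lexical SER probe; the encoder, pooling, classifier,
# and data below are illustrative stand-ins, not the paper's actual setup.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base", output_hidden_states=True)
encoder.eval()

def embed(texts, layer):
    """Mean-pool the token embeddings of one hidden layer per utterance."""
    feats = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = encoder(**inputs).hidden_states[layer]  # (1, tokens, dim)
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return feats

# Toy transcripts with emotion labels, standing in for MELD utterances.
train_texts = [
    "I can't believe you did that!",
    "That's fine, whatever you prefer.",
    "How could this happen to me?",
    "Okay, see you tomorrow then.",
]
train_labels = ["anger", "neutral", "anger", "neutral"]
test_texts = ["You never listen to me!", "Sure, that works for me."]
test_labels = ["anger", "neutral"]

# Layer-wise study: probe every encoder layer and compare weighted F1.
for layer in range(1, encoder.config.num_hidden_layers + 1):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(train_texts, layer), train_labels)
    preds = clf.predict(embed(test_texts, layer))
    print(f"layer {layer:2d}: WF1 = {f1_score(test_labels, preds, average='weighted'):.3f}")
```

In practice, an ASR front end would produce the transcripts from audio, and the per-layer WF1 curve indicates which encoder depth carries the most emotion-relevant lexical information.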
Similar Papers
Do Audio LLMs Really LISTEN, or Just Transcribe? Measuring Lexical vs. Acoustic Emotion Cues Reliance
Computation and Language
Computers hear words, not feelings, in voices.
Enhancing Speech Emotion Recognition with Multi-Task Learning and Dynamic Feature Fusion
Sound
Helps computers better understand feelings in voices.
Layer-wise Analysis for Quality of Multilingual Synthesized Speech
Audio and Speech Processing
Makes computer voices sound more human-like.