Explainable speech emotion recognition through attentive pooling: insights from attention-based temporal localization
By: Tahitoa Leygue, Astrid Sabourin, Christian Bolzmacher, et al.
Potential Business Impact:
Improves how computers recognize emotions in voices by focusing on the few moments of speech that carry the most emotional information.
State-of-the-art transformer models for Speech Emotion Recognition (SER) rely on temporal feature aggregation, yet advanced pooling methods remain underexplored. We systematically benchmark pooling strategies, including Multi-Query Multi-Head Attentive Statistics Pooling, which achieves a 3.5 percentage point macro F1 gain over average pooling. Attention analysis shows that 15 percent of frames capture 80 percent of emotion cues, revealing a localized pattern of emotional information. Analysis of high-attention frames reveals that non-linguistic vocalizations and hyperarticulated phonemes are disproportionately prioritized during pooling, mirroring human perceptual strategies. Our findings position attentive pooling as both a performant SER mechanism and a biologically plausible tool for explainable emotion localization. On the Interspeech 2025 Speech Emotion Recognition in Naturalistic Conditions Challenge, our approach obtained a macro F1 score of 0.3649.
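The abstract names Multi-Query Multi-Head Attentive Statistics Pooling but the listing carries no implementation, so the following is a minimal PyTorch sketch of the general technique, assuming a learned per-frame scorer over transformer encoder outputs. The class name MQMHAStatsPooling and the num_heads, num_queries, and hidden parameters are illustrative choices, not the authors' configuration.

import torch
import torch.nn as nn


class MQMHAStatsPooling(nn.Module):
    """Sketch of Multi-Query Multi-Head Attentive Statistics Pooling.

    Frame-level features (batch, time, dim) are pooled into one
    utterance-level vector: each head/query pair learns its own attention
    distribution over time and contributes an attention-weighted mean and
    standard deviation.
    """

    def __init__(self, feat_dim: int, num_heads: int = 4,
                 num_queries: int = 2, hidden: int = 128):
        super().__init__()
        assert feat_dim % num_heads == 0, "feat_dim must split evenly across heads"
        self.num_heads, self.num_queries = num_heads, num_queries
        self.head_dim = feat_dim // num_heads
        # Small frame scorer emitting one attention logit per head/query pair.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, num_heads * num_queries),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, time, feat_dim), e.g. transformer encoder outputs.
        b, t, _ = x.shape
        attn = torch.softmax(self.scorer(x), dim=1)      # normalize over time
        attn = attn.view(b, t, self.num_heads, self.num_queries, 1)
        xh = x.view(b, t, self.num_heads, 1, self.head_dim)
        mean = (attn * xh).sum(dim=1)                    # (b, H, Q, head_dim)
        var = (attn * (xh - mean.unsqueeze(1)) ** 2).sum(dim=1)
        std = (var + 1e-8).sqrt()
        pooled = torch.cat([mean.flatten(1), std.flatten(1)], dim=-1)
        # Return attention too, so high-weight frames can be inspected.
        return pooled, attn.squeeze(-1)                  # (b, 2*Q*feat_dim), (b, t, H, Q)


if __name__ == "__main__":
    frames = torch.randn(2, 300, 768)        # 2 utterances, 300 frames each
    pool = MQMHAStatsPooling(feat_dim=768)
    vec, attn = pool(frames)
    print(vec.shape)                         # torch.Size([2, 3072])
    # Attention mass held by the top 15% of frames, averaged over heads/queries;
    # a value near 1.0 would indicate the localized pattern the paper reports.
    w = attn.mean(dim=(2, 3))
    top = torch.topk(w, k=int(0.15 * w.shape[1]), dim=1).values.sum(dim=1)
    print(top)

Returning the attention weights alongside the pooled vector is what makes the same module usable for the temporal localization analysis the abstract describes, such as measuring how much attention mass falls on the top 15 percent of frames.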
Similar Papers
Enhancing Speech Emotion Recognition with Graph-Based Multimodal Fusion and Prosodic Features for the Speech Emotion Recognition in Naturalistic Conditions Challenge at Interspeech 2025
Sound
Combines speech, text, and prosody with graph-based fusion so computers recognize emotions in spoken words more accurately.
An Efficient Transfer Learning Method Based on Adapter with Local Attributes for Speech Emotion Recognition
Sound
Teaches an existing speech model to recognize feelings in voices using small, efficient add-on modules.
Beyond saliency: enhancing explanation of speech emotion recognition with expert-referenced acoustic cues
Machine Learning (CS)
Explains a computer's emotion judgments by pointing to acoustic cues that human experts recognize.