Score: 2

Improving Speech Emotion Recognition Through Cross Modal Attention Alignment and Balanced Stacking Model

Published: May 26, 2025 | arXiv ID: 2505.20007v2

By: Lucas Ueda, João Lima, Leonardo Marques, and more

Potential Business Impact:

Helps computers understand how people feel when they talk.

Business Areas:
Speech Recognition, Data and Analytics, Software

Emotion plays a fundamental role in human interaction, and systems capable of identifying emotions in speech are therefore crucial for human-computer interaction. Speech emotion recognition (SER) is a challenging problem, particularly for natural speech and when the available data is imbalanced across emotions. This paper presents our system for the 2025 Speech Emotion Recognition in Naturalistic Conditions Challenge. The proposed architecture leverages cross-modal attention to fuse representations from different modalities. To address class imbalance, we employed two training designs: (i) a weighted cross-entropy loss (WCE); and (ii) WCE with an additional neutral-expressive soft margin loss and balancing. We trained a total of 12 multimodal models and ensembled them with a balanced stacking model. The proposed system achieves a Macro-F1 score of 0.4094 and an accuracy of 0.4128 on 8-class speech emotion recognition.
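As a rough illustration of the two ideas named in the abstract, the minimal PyTorch sketch below combines cross-modal attention fusion with a weighted cross-entropy loss. It is an assumption-laden sketch, not the authors' implementation: the module name (CrossModalFusion), the choice of audio features as queries and text features as keys/values, the feature dimensions, and the example class counts are all illustrative.

```python
# Minimal sketch: cross-modal attention fusion + weighted cross-entropy (WCE).
# Names (CrossModalFusion, audio_feats, text_feats) and all dimensions/counts
# are hypothetical assumptions, not taken from the paper.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, num_classes: int = 8):
        super().__init__()
        # One plausible design: audio representations attend to text
        # representations (queries from audio, keys/values from text).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        fused, _ = self.cross_attn(audio_feats, text_feats, text_feats)
        # Mean-pool the fused sequence over time before classification.
        return self.classifier(fused.mean(dim=1))

# Weighted cross-entropy: weights inversely proportional to class frequency.
class_counts = torch.tensor([900., 120., 80., 300., 150., 60., 200., 100.])
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

model = CrossModalFusion()
audio_feats = torch.randn(4, 50, 256)  # (batch, audio frames, dim)
text_feats = torch.randn(4, 20, 256)   # (batch, text tokens, dim)
labels = torch.randint(0, 8, (4,))
loss = criterion(model(audio_feats, text_feats), labels)
loss.backward()
```

In this sketch, up-weighting rare emotion classes in the loss is the simplest of the two imbalance-handling designs the abstract lists; the paper additionally describes a neutral-expressive soft margin loss and a balanced stacking ensemble over 12 such models.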

Country of Origin
🇧🇷 Brazil

Repos / Data Links

Page Count
5 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing