Score: 1

Speech Emotion Recognition via Entropy-Aware Score Selection

Published: August 28, 2025 | arXiv ID: 2508.20796v1

By: ChenYi Chua, JunKai Wong, Chengxin Chen and more

Potential Business Impact:

Helps computers recognize how people feel from the way they talk.

Business Areas:
Speech Recognition, Data and Analytics, Software

In this paper, we propose a multimodal framework for speech emotion recognition that leverages entropy-aware score selection to combine speech and textual predictions. The proposed method integrates a primary pipeline consisting of an acoustic model based on wav2vec2.0 and a secondary pipeline consisting of a sentiment analysis model using RoBERTa-XLM, with transcriptions generated via Whisper-large-v3. We introduce a late score fusion approach based on entropy and varentropy thresholds to overcome the confidence constraints of the primary pipeline's predictions. A sentiment mapping strategy translates three sentiment categories into four target emotion classes, enabling coherent integration of the multimodal predictions. Results on the IEMOCAP and MSP-IMPROV datasets show that the proposed method offers a practical and reliable enhancement over traditional single-modality systems.
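To make the fusion idea concrete, below is a minimal Python sketch of entropy-aware score selection under stated assumptions: the entropy and varentropy thresholds (h_thresh, v_thresh), the interpolation weight alpha, the specific sentiment-to-emotion mapping, and all function names are illustrative placeholders, not the paper's actual configuration. The sketch only shows the general pattern of keeping the acoustic prediction when it is confident and falling back to a fused score when it is not.

```python
import numpy as np

# Hypothetical label sets; the paper maps three sentiments to four
# emotion classes, but the exact mapping below is assumed for illustration.
EMOTIONS = ["angry", "happy", "neutral", "sad"]
SENTIMENT_TO_EMOTION = {
    "negative": ["angry", "sad"],   # negative mass split over angry/sad
    "neutral":  ["neutral"],
    "positive": ["happy"],
}

def entropy_and_varentropy(probs):
    """Shannon entropy and varentropy (variance of -log p under p)."""
    probs = np.clip(probs, 1e-12, 1.0)
    logp = np.log(probs)
    h = -np.sum(probs * logp)             # entropy of the acoustic scores
    v = np.sum(probs * (-logp - h) ** 2)  # varentropy of the acoustic scores
    return h, v

def map_sentiment_to_emotion(sent_probs):
    """Spread each sentiment's probability uniformly over its mapped emotions."""
    emo_probs = np.zeros(len(EMOTIONS))
    for sentiment, p in sent_probs.items():
        targets = SENTIMENT_TO_EMOTION[sentiment]
        for t in targets:
            emo_probs[EMOTIONS.index(t)] += p / len(targets)
    return emo_probs

def entropy_aware_fusion(speech_probs, sent_probs,
                         h_thresh=1.0, v_thresh=0.5, alpha=0.5):
    """Keep the acoustic prediction when confident; otherwise fuse it with
    the sentiment-mapped text scores (thresholds and alpha are illustrative)."""
    h, v = entropy_and_varentropy(speech_probs)
    if h < h_thresh and v < v_thresh:
        fused = speech_probs  # confident primary (acoustic) pipeline
    else:
        text_probs = map_sentiment_to_emotion(sent_probs)
        fused = alpha * speech_probs + (1 - alpha) * text_probs
    return EMOTIONS[int(np.argmax(fused))], fused

# Example with made-up scores: a high-entropy acoustic output triggers fusion.
speech_probs = np.array([0.30, 0.28, 0.22, 0.20])
sent_probs = {"negative": 0.7, "neutral": 0.2, "positive": 0.1}
label, fused = entropy_aware_fusion(speech_probs, sent_probs)
print(label, fused)
```

In this toy run, the nearly uniform acoustic scores exceed the entropy threshold, so the strongly negative sentiment from the text pipeline shifts the fused decision; how the paper actually sets its thresholds and combines scores should be taken from the paper itself.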

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Sound