Rethinking Multimodal Sentiment Analysis: A High-Accuracy, Simplified Fusion Architecture

Published: May 5, 2025 | arXiv ID: 2505.04642v1

By: Nischal Mandal, Yang Li

Potential Business Impact:

Helps computers understand human emotions from what people say, how they sound, and how they look.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal sentiment analysis, a pivotal task in affective computing, seeks to understand human emotions by integrating cues from language, audio, and visual signals. While many recent approaches leverage complex attention mechanisms and hierarchical architectures, we propose a lightweight yet effective fusion-based deep learning model tailored for utterance-level emotion classification. Using the benchmark IEMOCAP dataset, which provides aligned text, audio-derived numeric features, and visual descriptors, we design modality-specific encoders consisting of fully connected layers followed by dropout regularization. The modality-specific representations are then fused by simple concatenation and passed through a dense fusion layer to capture cross-modal interactions. This streamlined architecture avoids computational overhead while preserving performance, achieving a classification accuracy of 92% across six emotion categories. Our approach demonstrates that, with careful feature engineering and modular design, simpler fusion strategies can match or outperform more complex models, particularly in resource-constrained environments.
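
The abstract outlines the full pipeline: per-modality fully connected encoders with dropout, concatenation of the three encodings, and a dense fusion layer feeding a six-way emotion classifier. Below is a minimal PyTorch sketch of that design; the hidden sizes, dropout rate, and input feature dimensions (text_dim, audio_dim, visual_dim) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Fully connected encoder with dropout for a single modality."""
    def __init__(self, in_dim, hidden_dim=128, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return self.net(x)

class SimpleFusionClassifier(nn.Module):
    """Concatenation fusion of modality encodings, then a dense fusion layer."""
    def __init__(self, text_dim, audio_dim, visual_dim,
                 hidden_dim=128, fusion_dim=128, num_classes=6, dropout=0.3):
        super().__init__()
        self.text_enc = ModalityEncoder(text_dim, hidden_dim, dropout)
        self.audio_enc = ModalityEncoder(audio_dim, hidden_dim, dropout)
        self.visual_enc = ModalityEncoder(visual_dim, hidden_dim, dropout)
        self.fusion = nn.Sequential(
            nn.Linear(3 * hidden_dim, fusion_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
        )
        self.classifier = nn.Linear(fusion_dim, num_classes)

    def forward(self, text, audio, visual):
        # Encode each modality independently, then fuse by simple concatenation.
        h = torch.cat(
            [self.text_enc(text), self.audio_enc(audio), self.visual_enc(visual)],
            dim=-1,
        )
        return self.classifier(self.fusion(h))

# Example forward pass with assumed feature dimensions (batch of 4 utterances).
model = SimpleFusionClassifier(text_dim=768, audio_dim=74, visual_dim=35)
logits = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 35))
print(logits.shape)  # torch.Size([4, 6]) -- six emotion categories
```

The sketch reflects the paper's stated design choice: no attention or hierarchical modules, just per-modality dense encoders, concatenation, and a single fusion layer, which keeps parameter count and compute low for resource-constrained settings.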

Country of Origin
🇬🇧 United Kingdom

Page Count
8 pages

Category
Computer Science:
Computation and Language