Score: 2

TiCAL: Typicality-Based Consistency-Aware Learning for Multimodal Emotion Recognition

Published: November 19, 2025 | arXiv ID: 2511.15085v1

By: Wen Yin, Siyu Zhan, Cencen Liu and more

Potential Business Impact:

Helps computers recognize a person's emotions more accurately, even when signals such as facial expression, voice, and words disagree with each other.

Business Areas:
Image Recognition, Data and Analytics, Software

Multimodal Emotion Recognition (MER) aims to accurately identify human emotional states by integrating heterogeneous modalities such as visual, auditory, and textual data. Existing approaches predominantly rely on unified emotion labels to supervise model training, often overlooking a critical challenge: inter-modal emotion conflicts, wherein different modalities within the same sample may express divergent emotional tendencies. In this work, we address this overlooked issue by proposing a novel framework, Typicality-based Consistency-Aware Multimodal Emotion Recognition (TiCAL), inspired by the stage-wise nature of human emotion perception. TiCAL dynamically assesses the consistency of each training sample by leveraging pseudo unimodal emotion labels alongside a typicality estimation. To further enhance emotion representation, we embed features in a hyperbolic space, enabling the capture of fine-grained distinctions among emotional categories. By incorporating consistency estimates into the learning process, our method improves model performance, particularly on samples exhibiting high modality inconsistency. Extensive experiments on benchmark datasets, e.g., CMU-MOSEI and MER2023, validate the effectiveness of TiCAL in mitigating inter-modal emotional conflicts and enhancing overall recognition accuracy, achieving improvements of about 2.6% over the state-of-the-art DMD method.
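To make the core idea concrete, the sketch below shows one way the abstract's consistency-weighted supervision could look in code. It is a rough illustration based only on the abstract, not the authors' implementation: the agreement proxy (pairwise similarity of unimodal softmax predictions) and all function names are assumptions standing in for the paper's typicality-based estimate, and the hyperbolic embedding component is omitted entirely.

```python
# Hypothetical sketch: down-weight the supervised loss for samples whose
# modalities (e.g., visual, audio, text) disagree in their pseudo labels.
import torch
import torch.nn.functional as F

def consistency_weights(unimodal_logits, tau=1.0):
    """unimodal_logits: list of [B, C] tensors, one per modality.

    Returns a [B] weight in (0, 1]: average pairwise similarity of the
    modalities' softmax predictions, used here as a simple stand-in for
    the paper's typicality-based consistency estimate.
    """
    probs = [F.softmax(logits / tau, dim=-1) for logits in unimodal_logits]
    agree = torch.zeros(probs[0].size(0), device=probs[0].device)
    pairs = 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            # Cosine similarity between predicted distributions as agreement.
            agree += F.cosine_similarity(probs[i], probs[j], dim=-1)
            pairs += 1
    return (agree / pairs).clamp(min=1e-3)

def consistency_weighted_loss(fused_logits, labels, unimodal_logits):
    """Cross-entropy on the fused prediction, scaled per sample by consistency."""
    w = consistency_weights(unimodal_logits).detach()
    ce = F.cross_entropy(fused_logits, labels, reduction="none")
    return (w * ce).mean()

# Toy usage with random tensors: 3 modalities, 7 emotion classes.
if __name__ == "__main__":
    B, C = 4, 7
    fused = torch.randn(B, C, requires_grad=True)
    unimodal = [torch.randn(B, C) for _ in range(3)]
    y = torch.randint(0, C, (B,))
    loss = consistency_weighted_loss(fused, y, unimodal)
    loss.backward()
    print(float(loss))
```

In this reading, samples whose modalities broadly agree contribute full gradient signal, while conflicted samples are softly discounted rather than discarded, which is consistent with the abstract's claim of improved performance on highly inconsistent samples.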

Country of Origin
🇨🇳 🇦🇺 China, Australia

Page Count
9 pages

Category
Computer Science:
Computer Vision and Pattern Recognition