Score: 2

Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models

Published: April 10, 2025 | arXiv ID: 2504.07521v2

By: Yuxiang Lin, Jingdong Sun, Zhi-Qi Cheng, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps AI understand *why* people feel emotions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Most existing emotion analysis emphasizes which emotion arises (e.g., happy, sad, angry) but neglects the deeper why. We propose Emotion Interpretation (EI), focusing on the causal factors, whether explicit (e.g., observable objects, interpersonal interactions) or implicit (e.g., cultural context, off-screen events), that drive emotional responses. Unlike traditional emotion recognition, EI tasks require reasoning about triggers rather than mere labeling. To facilitate EI research, we present EIBench, a large-scale benchmark encompassing 1,615 basic EI samples and 50 complex EI samples featuring multifaceted emotions. Each instance demands a rationale-based explanation rather than straightforward categorization. We further propose a Coarse-to-Fine Self-Ask (CFSA) annotation pipeline, which guides Vision-Language Models (VLLMs) through iterative question-answer rounds to yield high-quality labels at scale. Extensive evaluations of open-source and proprietary large language models under four experimental settings reveal consistent performance gaps, especially in more intricate scenarios, underscoring EI's potential to enrich empathetic, context-aware AI applications. Our benchmark and methods are publicly available at https://github.com/Lum1104/EIBench, offering a foundation for advanced multimodal causal analysis and next-generation affective computing.
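The abstract describes CFSA only at a high level: a coarse question pins down the emotion label, then finer follow-up questions elicit the rationale. As a rough illustration only, a coarse-to-fine self-ask loop might look like the minimal Python sketch below, where `query` (the injected VLLM call), the prompt templates, and the single-pass structure are all hypothetical stand-ins, not the authors' actual implementation.

```python
from typing import Callable

# Hypothetical sketch of a Coarse-to-Fine Self-Ask (CFSA) loop.
# `ask(image_path, prompt)` stands in for whatever VLLM call the
# real pipeline uses; it is an assumed interface, not the paper's API.

COARSE_PROMPT = "Which emotion does the main subject display? Answer with one word."
FINE_PROMPTS = [
    "What visible cues (objects, expressions, interactions) support the emotion '{emotion}'?",
    "What implicit context (cultural background, off-screen events) could also explain '{emotion}'?",
]

def cfsa_annotate(image_path: str, ask: Callable[[str, str], str]) -> dict:
    """Run one coarse-to-fine self-ask pass and collect a rationale."""
    # Coarse round: fix the surface-level emotion label first.
    emotion = ask(image_path, COARSE_PROMPT).strip()

    # Fine rounds: iteratively ask "why", conditioning each follow-up
    # question on the coarse answer, and accumulate the evidence.
    rationale = []
    for template in FINE_PROMPTS:
        answer = ask(image_path, template.format(emotion=emotion))
        rationale.append(answer.strip())

    return {"emotion": emotion, "rationale": " ".join(rationale)}

if __name__ == "__main__":
    # Toy stand-in model so the sketch runs without a real VLLM backend.
    def fake_vlm(image_path: str, prompt: str) -> str:
        return "joy" if "one word" in prompt else f"(model answer to: {prompt})"

    print(cfsa_annotate("example.jpg", fake_vlm))
```

In this sketch, later questions are conditioned on earlier answers, which mirrors the iterative question-answer rounds the abstract attributes to CFSA; the real pipeline presumably adds more rounds and quality checks than shown here.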

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/Lum1104/EIBench

Page Count
21 pages

Category
Computer Science: Artificial Intelligence