Emotion-Qwen: A Unified Framework for Emotion and Vision Understanding
By: Dawei Huang, Qing Li, Chuan Yan, and more
Potential Business Impact:
Helps computers understand feelings in videos.
Accurate emotion understanding in videos requires effectively recognizing and interpreting emotional states by integrating visual, textual, auditory, and contextual cues. Although recent Large Multimodal Models (LMMs) have made significant progress in general vision-language (VL) tasks, their performance often deteriorates in emotion-specific scenarios, and they suffer catastrophic forgetting when fine-tuned on emotion-centric tasks. To overcome these limitations, we propose Emotion-Qwen, a unified multimodal framework designed to simultaneously enable robust emotion understanding and preserve general VL reasoning capabilities. Emotion-Qwen introduces a novel Hybrid Compressor based on a Mixture-of-Experts (MoE) architecture, dynamically routing inputs to balance emotion-specific processing and general multimodal reasoning. We further propose a carefully structured three-stage pre-training pipeline that leverages extensive general and emotion-focused datasets to strengthen multimodal representation robustness and model adaptability. Additionally, we develop the Video Emotion Reasoning (VER) dataset, a large-scale bilingual resource containing over 40K video clips annotated with detailed, context-aware emotional descriptions, which significantly facilitates research on fine-grained emotional reasoning. Extensive experiments confirm that Emotion-Qwen achieves state-of-the-art performance across multiple emotion recognition and reasoning benchmarks while maintaining highly competitive results on general VL tasks.
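The MoE-style routing behind the Hybrid Compressor can be pictured with a short sketch. The code below is illustrative only: the module names (`HybridCompressor`, `emotion_expert`, `general_expert`), the dimensions, the two-expert split, and the soft per-token gating are assumptions made for clarity, not the paper's actual implementation.

```python
# A minimal sketch of MoE-style routing between an emotion-specific expert
# and a general vision-language expert. All names and sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridCompressor(nn.Module):
    """Blends two expert MLPs per visual token via a learned gate (assumed design)."""

    def __init__(self, dim: int = 1024, hidden: int = 4096):
        super().__init__()
        # Two expert MLPs: one tuned for emotion cues, one for general VL reasoning.
        self.emotion_expert = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )
        self.general_expert = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )
        # Gating network produces per-token mixture weights over the two experts.
        self.gate = nn.Linear(dim, 2)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) visual features from the encoder.
        weights = F.softmax(self.gate(tokens), dim=-1)  # (B, N, 2)
        emo = self.emotion_expert(tokens)               # (B, N, dim)
        gen = self.general_expert(tokens)               # (B, N, dim)
        # Soft routing: blend the expert outputs token by token.
        return weights[..., 0:1] * emo + weights[..., 1:2] * gen


if __name__ == "__main__":
    compressor = HybridCompressor(dim=1024)
    visual_tokens = torch.randn(2, 256, 1024)  # dummy batch of visual tokens
    print(compressor(visual_tokens).shape)     # torch.Size([2, 256, 1024])
```

Soft gating like this lets the model keep both pathways active during fine-tuning, which is one plausible way a routed architecture can mitigate the catastrophic forgetting the abstract describes.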
Similar Papers
Qwen3-VL Technical Report
CV and Pattern Recognition
Lets computers understand pictures, text, and video together.
A Unified Framework for Emotion Recognition and Sentiment Analysis via Expert-Guided Multimodal Fusion with Large Language Models
Computation and Language
Computers understand feelings from talking, seeing, and writing.