Multi-Track Multimodal Learning on iMiGUE: Micro-Gesture and Emotion Recognition

Published: December 29, 2025 | arXiv ID: 2512.23291v1

By: Arman Martirosyan, Shahane Tigranyan, Maria Razzhivina, and more

Potential Business Impact:

Lets software recognize subtle body gestures in video and infer a person's emotional state from them.

Business Areas:
Image Recognition, Data and Analytics, Software

Micro-gesture recognition and behavior-based emotion prediction are both highly challenging tasks that require modeling subtle, fine-grained human behaviors, primarily from video and skeletal pose data. In this work, we present two multimodal frameworks designed to tackle both problems on the iMiGUE dataset. For micro-gesture classification, we exploit the complementary strengths of RGB and 3D pose-based representations to capture nuanced spatio-temporal patterns. To represent gestures comprehensively, video and skeletal embeddings are extracted using MViTv2-S and 2s-AGCN, respectively, and then integrated through a Cross-Modal Token Fusion module that combines appearance and pose information. For emotion recognition, our framework extends to behavior-based emotion prediction, a binary classification task that identifies emotional states from visual cues. We leverage facial and contextual embeddings extracted with SwinFace and MViTv2-S and fuse them through an InterFusion module designed to capture emotional expressions and body gestures. Experiments on the iMiGUE dataset, conducted within the scope of the MiGA 2025 Challenge, demonstrate the robust performance and accuracy of our method on the behavior-based emotion prediction task, where our approach secured 2nd place.
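
For a concrete picture of the fusion step, below is a minimal PyTorch sketch of a cross-modal token fusion module in the spirit the abstract describes: tokens from the RGB stream and the skeleton stream exchange information via cross-attention before a joint classification head. The class name, token shapes, head count, and 32-way output are illustrative assumptions; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn


class CrossModalTokenFusion(nn.Module):
    """Bidirectional cross-attention fusion of RGB video tokens and
    skeleton tokens. A hypothetical reconstruction: the listing does not
    specify the module's internals, so every design choice here
    (cross-attention, residual norms, mean pooling) is an assumption."""

    def __init__(self, dim: int = 512, num_heads: int = 8, num_classes: int = 32):
        super().__init__()
        # Each stream queries the other: video attends to pose and vice versa.
        self.vid_to_pose = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pose_to_vid = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_p = nn.LayerNorm(dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, vid_tokens: torch.Tensor, pose_tokens: torch.Tensor) -> torch.Tensor:
        # vid_tokens: (B, Tv, D), e.g. from an MViTv2-S backbone
        # pose_tokens: (B, Tp, D), e.g. from a 2s-AGCN backbone
        v_attn, _ = self.vid_to_pose(vid_tokens, pose_tokens, pose_tokens)
        p_attn, _ = self.pose_to_vid(pose_tokens, vid_tokens, vid_tokens)
        v = self.norm_v(vid_tokens + v_attn)  # residual connection + norm
        p = self.norm_p(pose_tokens + p_attn)
        # Mean-pool each stream over time, concatenate, and classify.
        fused = torch.cat([v.mean(dim=1), p.mean(dim=1)], dim=-1)
        return self.classifier(fused)


# Usage with dummy tensors (shapes and class count are illustrative):
model = CrossModalTokenFusion(dim=512, num_heads=8, num_classes=32)
vid = torch.randn(4, 196, 512)    # batch of 4 video token sequences
pose = torch.randn(4, 64, 512)    # batch of 4 skeleton token sequences
logits = model(vid, pose)         # -> (4, 32) micro-gesture logits
```

The emotion branch's InterFusion module could be sketched analogously, with SwinFace facial embeddings and MViTv2-S contextual embeddings as the two streams and a single binary logit as the head.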

Country of Origin
🇦🇲 🇷🇺 Armenia, Russian Federation

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition