MM-Gesture: Towards Precise Micro-Gesture Recognition through Multimodal Fusion
By: Jihao Gu, Fei Wang, Kun Li, and more
Potential Business Impact:
Recognizes tiny hand movements from many video types.
In this paper, we present MM-Gesture, the solution developed by our team HFUT-VUT, which ranked 1st in the micro-gesture classification track of the 3rd MiGA Challenge at IJCAI 2025, outperforming previous state-of-the-art methods. MM-Gesture is a multimodal fusion framework designed specifically for recognizing subtle, short-duration micro-gestures (MGs), integrating complementary cues from joint, limb, RGB video, Taylor-series video, optical-flow video, and depth video modalities. Built on PoseConv3D and Video Swin Transformer architectures with a novel modality-weighted ensemble strategy, our method further enhances RGB-modality performance via transfer learning from pre-training on the larger MA-52 dataset. Extensive experiments on the iMiGUE benchmark, including ablation studies across different modalities, validate the effectiveness of our proposed approach, achieving a top-1 accuracy of 73.213%. Code is available at: https://github.com/momiji-bit/MM-Gesture.
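To make the modality-weighted ensemble concrete, here is a minimal sketch of late fusion over per-modality class scores. The modality names match the abstract, but the weights, class count, and function names are illustrative assumptions, not values or APIs from the paper.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 32  # assumed label-set size; adjust to the benchmark's classes

# Modalities listed in the abstract; the per-modality weights below are
# hypothetical placeholders, not the weights used by MM-Gesture.
MODALITIES = ["joint", "limb", "rgb", "taylor", "flow", "depth"]
WEIGHTS = {"joint": 1.0, "limb": 1.0, "rgb": 1.5,
           "taylor": 0.5, "flow": 0.5, "depth": 0.5}

def weighted_ensemble(logits_per_modality: dict[str, torch.Tensor],
                      weights: dict[str, float]) -> torch.Tensor:
    """Fuse per-modality logits via a weighted sum of softmax probabilities."""
    fused = None
    for name, logits in logits_per_modality.items():
        probs = F.softmax(logits, dim=-1) * weights[name]
        fused = probs if fused is None else fused + probs
    return fused

# Example: random logits stand in for each backbone's output (batch of 4 clips).
logits = {m: torch.randn(4, NUM_CLASSES) for m in MODALITIES}
fused_scores = weighted_ensemble(logits, WEIGHTS)
predictions = fused_scores.argmax(dim=-1)  # top-1 micro-gesture class per clip
```

Applying softmax before weighting puts all backbones on a comparable probability scale, so the scalar weights directly encode how much each modality is trusted in the final vote.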
Similar Papers
Towards Fine-Grained Emotion Understanding via Skeleton-Based Micro-Gesture Recognition
CV and Pattern Recognition
Reads tiny hand movements to guess hidden feelings.
Multi-Track Multimodal Learning on iMiGUE: Micro-Gesture and Emotion Recognition
CV and Pattern Recognition
Lets computers understand your feelings and tiny movements.
Online Micro-gesture Recognition Using Data Augmentation and Spatial-Temporal Attention
CV and Pattern Recognition
Lets computers see tiny hand movements in videos.