Multimodal Skeleton-Based Action Representation Learning via Decomposition and Composition
By: Hongsong Wang, Heng Fei, Bingxuan Dai, et al.
Multimodal human action understanding is a significant problem in computer vision, whose central challenge is to exploit the complementarity among diverse modalities while keeping the model efficient. Most existing methods rely on simple late fusion to improve performance, which incurs substantial computational overhead. Early fusion with a shared backbone for all modalities is efficient, but it struggles to achieve competitive performance. To resolve this trade-off between efficiency and effectiveness, we introduce a self-supervised multimodal skeleton-based action representation learning framework, named Decomposition and Composition. The Decomposition strategy decomposes the fused multimodal features into distinct unimodal features and aligns each with its ground-truth unimodal counterpart. Conversely, the Composition strategy integrates multiple unimodal features and uses them as self-supervised guidance to enhance the learning of multimodal representations. Extensive experiments on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD II datasets demonstrate that the proposed method strikes an excellent balance between computational cost and model performance.
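The abstract describes two self-supervised objectives built around an early-fusion backbone. As a rough illustration only, the following minimal PyTorch sketch shows how such Decomposition and Composition losses might be wired together; every module name, dimension, and loss choice here (linear heads, cosine alignment, detached targets) is an assumption for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecompositionComposition(nn.Module):
    """Hypothetical sketch of the Decomposition/Composition objectives.

    A shared backbone (early fusion) encodes concatenated modality inputs;
    per-modality heads "decompose" the fused feature, and a fusion head
    "composes" unimodal features into a multimodal guidance signal.
    All names and loss choices are assumptions, not the authors' code.
    """

    def __init__(self, in_dims, fused_dim=256):
        super().__init__()
        # Shared backbone over early-fused (concatenated) modality inputs.
        self.backbone = nn.Sequential(
            nn.Linear(sum(in_dims), fused_dim), nn.ReLU(),
            nn.Linear(fused_dim, fused_dim),
        )
        # Decomposition: one projection head per modality.
        self.decompose = nn.ModuleList(
            nn.Linear(fused_dim, fused_dim) for _ in in_dims
        )
        # Unimodal encoders whose outputs serve as ground-truth targets
        # (detached below, so they are not trained by these losses).
        self.unimodal = nn.ModuleList(
            nn.Linear(d, fused_dim) for d in in_dims
        )
        # Composition: fuse unimodal features into a guidance vector.
        self.compose = nn.Linear(fused_dim * len(in_dims), fused_dim)

    def forward(self, modalities):
        fused = self.backbone(torch.cat(modalities, dim=-1))

        # Decomposition loss: each decomposed feature should align with
        # the corresponding unimodal target (cosine alignment, assumed).
        uni_feats = [enc(x).detach() for enc, x in zip(self.unimodal, modalities)]
        dec_loss = sum(
            1 - F.cosine_similarity(head(fused), tgt, dim=-1).mean()
            for head, tgt in zip(self.decompose, uni_feats)
        )

        # Composition loss: the composed unimodal features act as
        # self-supervised guidance for the fused multimodal feature.
        guidance = self.compose(torch.cat(uni_feats, dim=-1)).detach()
        comp_loss = 1 - F.cosine_similarity(fused, guidance, dim=-1).mean()

        return fused, dec_loss + comp_loss

# Usage with three hypothetical skeleton streams (e.g. joint, bone, motion),
# each flattened to a 150-dim vector per clip:
model = DecompositionComposition(in_dims=[150, 150, 150])
joint, bone, motion = (torch.randn(8, 150) for _ in range(3))
feat, loss = model([joint, bone, motion])
loss.backward()
```

Detaching the unimodal and composed targets keeps the gradients flowing only into the shared backbone and its heads, which is one plausible way to reconcile the efficiency of a single early-fusion backbone with guidance from per-modality features.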