TSkel-Mamba: Temporal Dynamic Modeling via State Space Model for Human Skeleton-based Action Recognition
By: Yanan Liu, Jun Liu, Hao Zhang, and more
Potential Business Impact:
Helps computers understand human movements better.
Skeleton-based action recognition has garnered significant attention in the computer vision community. Inspired by the recent success of the selective state-space model (SSM) Mamba in modeling 1D temporal sequences, we propose TSkel-Mamba, a hybrid Transformer-Mamba framework that effectively captures both spatial and temporal dynamics. In particular, our approach leverages a Spatial Transformer for spatial feature learning while utilizing Mamba for temporal modeling. Mamba, however, employs separate SSM blocks for individual channels, which inherently limits its ability to model inter-channel dependencies. To better adapt Mamba to skeleton data and enhance its ability to model temporal dependencies, we introduce a Temporal Dynamic Modeling (TDM) block, a versatile plug-and-play component that integrates a novel Multi-scale Temporal Interaction (MTI) module. The MTI module employs multi-scale Cycle operators to capture cross-channel temporal interactions, a critical factor in action recognition. Extensive experiments on the NTU-RGB+D 60, NTU-RGB+D 120, NW-UCLA, and UAV-Human datasets demonstrate that TSkel-Mamba achieves state-of-the-art performance while maintaining low inference time, making it both efficient and highly effective.
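The abstract states that per-channel SSM blocks in Mamba cannot mix information across channels, and that the MTI module addresses this with multi-scale Cycle operators. The paper does not spell out the operator here, so the sketch below is an illustrative assumption, not the authors' exact formulation: within temporal windows of a given scale, channels are cyclically rolled by a frame-dependent offset, so each output channel blends values from neighboring channels across time; several window scales are then averaged.

```python
import numpy as np

def cycle_operator(x, shift, scale):
    """Illustrative Cycle operator (assumed form, not the paper's exact
    definition): within non-overlapping temporal windows of length
    `scale`, cyclically roll each frame's channel vector by a
    frame-dependent offset, producing cross-channel temporal mixing.
    x: array of shape (T, C) — T frames, C channels per frame."""
    T, C = x.shape
    out = np.empty_like(x)
    for start in range(0, T, scale):
        window = x[start:start + scale]  # (w, C), w <= scale
        for i in range(window.shape[0]):
            # Offset grows with position in the window, so channels
            # mix with different temporal neighbors at each step.
            out[start + i] = np.roll(window[i], shift * i)
    return out

def multi_scale_interaction(x, scales=(2, 4, 8)):
    """Average the Cycle operator over several temporal scales, in the
    spirit of the described multi-scale interaction (hypothetical)."""
    return sum(cycle_operator(x, shift=1, scale=s) for s in scales) / len(scales)

x = np.random.randn(16, 32)  # 16 frames, 32 channels
y = multi_scale_interaction(x)
print(y.shape)  # (16, 32)
```

Because rolling is a pure permutation of channels within each frame, the operator adds no parameters and preserves the sequence shape, which is consistent with the "plug-and-play" framing of the TDM block.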
Similar Papers
Learning Human Motion with Temporally Conditional Mamba
CV and Pattern Recognition
Makes computer-made people move like real humans.
InterMamba: Efficient Human-Human Interaction Generation with Adaptive Spatio-Temporal Mamba
CV and Pattern Recognition
Makes computer-made people move together realistically.
SasMamba: A Lightweight Structure-Aware Stride State Space Model for 3D Human Pose Estimation
CV and Pattern Recognition
Helps computers understand human body movements better.