UniSTFormer: Unified Spatio-Temporal Lightweight Transformer for Efficient Skeleton-Based Action Recognition
By: Wenhan Wu, Zhishuai Guo, Chen Chen, and more
Potential Business Impact:
Makes computers understand human movements better, faster.
Skeleton-based action recognition (SAR) has achieved impressive progress with transformer architectures. However, existing methods often rely on complex module compositions and heavy designs, leading to increased parameter counts, high computational costs, and limited scalability. In this paper, we propose a unified spatio-temporal lightweight transformer framework that integrates spatial and temporal modeling within a single attention module, eliminating the need for separate temporal modeling blocks. This approach reduces redundant computations while preserving temporal awareness within the spatial modeling process. Furthermore, we introduce a simplified multi-scale pooling fusion module that combines local and global pooling pathways to enhance the model's ability to capture fine-grained local movements and overarching global motion patterns. Extensive experiments on benchmark datasets demonstrate that our lightweight model achieves a superior balance between accuracy and efficiency, reducing parameter complexity by over 58% and lowering computational cost by over 60% compared to state-of-the-art transformer-based baselines, while maintaining competitive recognition performance.
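The core idea described above is to treat all joints across all frames as a single token sequence, so one attention pass handles both spatial and temporal mixing, followed by a fusion of local and global pooling pathways. The following NumPy snippet is a minimal conceptual sketch of that idea, not the paper's actual implementation; all function names, shapes, and the window size are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unified_st_attention(x, w_q, w_k, w_v):
    # x: (T*V, C) — frames (T) and joints (V) flattened into ONE token
    # sequence, so a single attention pass mixes spatial and temporal
    # information without a separate temporal-modeling block.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def multiscale_pool_fusion(x, local_window):
    # Local pathway: average over small token windows (fine-grained motion).
    n, c = x.shape
    local = x.reshape(n // local_window, local_window, c).mean(axis=1)
    # Global pathway: average over all tokens (overall motion pattern).
    global_ = x.mean(axis=0, keepdims=True)
    return local + global_  # broadcast-add the two pathways

# Toy example (shapes are illustrative, not from the paper).
rng = np.random.default_rng(0)
T, V, C = 4, 5, 8                      # frames, joints, channels
x = rng.standard_normal((T * V, C))
w = [rng.standard_normal((C, C)) for _ in range(3)]
out = unified_st_attention(x, *w)      # (20, 8)
fused = multiscale_pool_fusion(out, local_window=V)  # pool per frame → (4, 8)
print(out.shape, fused.shape)
```

Pooling with `local_window=V` collapses each frame's joints into one vector, which is one plausible way a "local" pathway could preserve per-frame motion detail while the global average captures the clip-level pattern.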
Similar Papers
UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines
CV and Pattern Recognition
One model learns many video tasks at once.
Foundation Model for Skeleton-Based Human Action Understanding
CV and Pattern Recognition
Teaches robots to understand human movements better.
USTM: Unified Spatial and Temporal Modeling for Continuous Sign Language Recognition
CV and Pattern Recognition
Helps computers understand sign language from videos.