STAR-Pose: Efficient Low-Resolution Video Human Pose Estimation via Spatial-Temporal Adaptive Super-Resolution
By: Yucheng Jin, Jinyan Chen, Ziyue He, and more
Potential Business Impact:
Makes blurry people-tracking videos clear for computers.
Human pose estimation in low-resolution videos presents a fundamental challenge in computer vision. Conventional methods either assume high-quality inputs or employ computationally expensive cascaded processing, which limits their deployment in resource-constrained environments. We propose STAR-Pose, a spatial-temporal adaptive super-resolution framework specifically designed for video-based human pose estimation. Our method features a novel spatial-temporal Transformer with LeakyReLU-modified linear attention that efficiently captures long-range temporal dependencies, complemented by an adaptive fusion module that integrates a parallel CNN branch for local texture enhancement. We also design a pose-aware compound loss to achieve task-oriented super-resolution: it guides the network to reconstruct structural features that are most beneficial for keypoint localization, rather than optimizing purely for visual quality. Extensive experiments on several mainstream video human pose estimation (HPE) datasets demonstrate that STAR-Pose outperforms existing approaches, achieving up to 5.2% mAP improvement under extremely low-resolution (64×48) conditions while delivering 2.8x to 4.4x faster inference than cascaded approaches.
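To make the attention component concrete, here is a minimal, hypothetical PyTorch sketch of linear attention with a LeakyReLU-based kernel feature map over flattened space-time tokens. The class name `LeakyLinearAttention`, the +1 kernel shift, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumption, not the paper's released code) of linear attention
# with a LeakyReLU-based kernel, applied over spatial-temporal tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LeakyLinearAttention(nn.Module):
    """Multi-head linear attention using a LeakyReLU-shifted feature map."""

    def __init__(self, dim: int, heads: int = 8, negative_slope: float = 0.01, eps: float = 1e-6):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.negative_slope = negative_slope
        self.eps = eps
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def feature_map(self, x: torch.Tensor) -> torch.Tensor:
        # LeakyReLU kernel shifted by +1 so it stays positive for the
        # normalized activations typical inside a Transformer block (an
        # assumption made here for numerical stability).
        return F.leaky_relu(x, self.negative_slope) + 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens flatten the spatial-temporal grid.
        b, n, d = x.shape
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, self.heads, -1).transpose(1, 2) for t in qkv)
        q, k = self.feature_map(q), self.feature_map(k)
        # Aggregate keys and values first: cost is O(n) in token count
        # rather than the O(n^2) of softmax attention.
        kv = torch.einsum("bhnd,bhne->bhde", k, v)
        denom = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2))
        out = torch.einsum("bhnd,bhde->bhne", q, kv) / denom.clamp(min=self.eps).unsqueeze(-1)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


# Example: 8 frames of a 64x48 clip downsampled to a 16x12 feature grid.
tokens = torch.randn(2, 8 * 16 * 12, 256)   # (batch, T*H*W, dim)
attn = LeakyLinearAttention(dim=256)
print(attn(tokens).shape)                    # torch.Size([2, 1536, 256])
```

Because keys and values are aggregated before interacting with the queries, the cost grows linearly with the number of space-time tokens, which is what makes long-range temporal modeling affordable at low resolution; the adaptive CNN fusion branch and pose-aware loss described in the abstract would operate on top of a block like this.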
Similar Papers
StarPose: 3D Human Pose Estimation via Spatial-Temporal Autoregressive Diffusion
CV and Pattern Recognition
Makes computer-drawn people move realistically.
Multi-Grained Feature Pruning for Video-Based Human Pose Estimation
CV and Pattern Recognition
Makes computer movement tracking faster and more accurate.