DepthGait: Multi-Scale Cross-Level Feature Fusion of RGB-Derived Depth and Silhouette Sequences for Robust Gait Recognition
By: Xinzhu Li, Juepeng Zheng, Yikun Chen, and more
Potential Business Impact:
Helps computers tell people apart by how they walk.
Robust gait recognition requires highly discriminative representations, which are closely tied to the input modalities. While binary silhouettes and skeletons have dominated the recent literature, these 2D representations fall short of capturing the cues needed to handle viewpoint variations and to preserve the finer, meaningful details of gait. In this paper, we introduce a novel framework, termed DepthGait, that combines RGB-derived depth maps with silhouettes for enhanced gait recognition. Specifically, in addition to the 2D silhouette representation of the human body, the proposed pipeline explicitly estimates depth maps from a given RGB image sequence and uses them as a new modality to capture discriminative features inherent in human locomotion. A novel multi-scale and cross-level fusion scheme has also been developed to bridge the modality gap between depth maps and silhouettes. Extensive experiments on standard benchmarks demonstrate that the proposed DepthGait achieves state-of-the-art performance compared to peer methods and attains high mean rank-1 accuracy on challenging datasets.
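The abstract describes a two-modality pipeline: per-level fusion of depth and silhouette feature maps, plus cross-level propagation across spatial scales. The paper does not specify the architecture here, so the sketch below is a minimal, hypothetical illustration of that idea using NumPy: average pooling builds a feature pyramid per modality, each level merges the two modalities, and coarser levels are upsampled and added into finer ones. All function names and the averaging/addition choices are assumptions, not the authors' actual fusion scheme.

```python
import numpy as np

def avg_pool2d(x, k):
    """Downsample a 2D map by factor k via average pooling (H, W divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_features(x, scales=(1, 2, 4)):
    """Build a pyramid of maps at several spatial scales (hypothetical stand-in
    for a CNN feature pyramid)."""
    return [avg_pool2d(x, s) if s > 1 else x for s in scales]

def cross_level_fuse(depth_pyr, sil_pyr):
    """Merge the two modalities at each level, then propagate coarse levels
    into finer ones (cross-level) via nearest-neighbor upsampling."""
    fused = [0.5 * (d + s) for d, s in zip(depth_pyr, sil_pyr)]
    out = fused[-1]  # start from the coarsest level
    for lvl in reversed(fused[:-1]):
        k = lvl.shape[0] // out.shape[0]
        out = lvl + np.kron(out, np.ones((k, k)))  # upsample coarse map and add
    return out

# Toy inputs standing in for an estimated depth map and a binary silhouette.
depth = np.random.rand(8, 8)
sil = (np.random.rand(8, 8) > 0.5).astype(float)
f = cross_level_fuse(multiscale_features(depth), multiscale_features(sil))
print(f.shape)  # (8, 8)
```

In a real model the per-level merge would be learned (e.g., concatenation followed by a convolution) rather than a fixed average; the sketch only shows where the multi-scale and cross-level paths sit relative to each other.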
Similar Papers
Explainable Parkinson's Disease Gait Recognition Using Multimodal RGB-D Fusion and Large Language Models
CV and Pattern Recognition
Helps doctors spot Parkinson's by watching how people walk.
RobustGait: Robustness Analysis for Appearance Based Gait Recognition
CV and Pattern Recognition
Helps computers recognize people by how they walk.
DINOv2 Driven Gait Representation Learning for Video-Based Visible-Infrared Person Re-identification
CV and Pattern Recognition
Find people in videos using their walk.