Clothes-Changing Person Re-identification Based On Skeleton Dynamics
By: Asaf Joseph, Shmuel Peleg
Potential Business Impact:
Find people even when they change clothes.
Clothes-Changing Person Re-Identification (ReID) aims to recognize the same individual across videos captured at different times and locations. The task is particularly challenging because appearance changes between captures: clothing, hairstyle, and accessories. Traditional ReID methods often depend on appearance features, so their accuracy drops when clothing changes. We propose a Clothes-Changing ReID method that uses only skeleton data and no appearance features. Our approach uses a spatio-temporal Graph Convolution Network (GCN) encoder to generate a skeleton-based descriptor for each individual. At test time, we improve accuracy by aggregating predictions from multiple segments of a video clip. Evaluated on the CCVID dataset with several different pose estimation models, our method achieves state-of-the-art performance, offering a robust and efficient solution for Clothes-Changing ReID.
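The pipeline described above (a graph convolution over skeleton joints, temporal pooling into a descriptor, and averaging descriptors over multiple clip segments) can be sketched as follows. This is a minimal illustration, not the paper's architecture: the joint count, skeleton edges, feature dimensions, segment length, and the single-layer GCN with random weights are all illustrative assumptions.

```python
import numpy as np

N_JOINTS = 17            # assumed COCO-style pose with 17 keypoints
IN_DIM, OUT_DIM = 2, 64  # (x, y) joint coordinates -> 64-d features

rng = np.random.default_rng(0)

# Skeleton adjacency with self-loops, symmetrically normalized as in
# standard GCNs: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A = np.eye(N_JOINTS)
edges = [(0, 1), (1, 2), (2, 3)]  # illustrative subset of limb connections
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

# Learned projection weights; random here since this is only a sketch.
W = rng.normal(scale=0.1, size=(IN_DIM, OUT_DIM))

def encode_segment(seg):
    """seg: (T, N_JOINTS, IN_DIM) skeleton sequence -> (OUT_DIM,) descriptor."""
    feats = np.maximum(A_hat @ seg @ W, 0.0)  # spatial graph conv + ReLU, per frame
    return feats.mean(axis=(0, 1))            # spatio-temporal average pooling

def clip_descriptor(clip, seg_len=8):
    """Aggregate over a clip by averaging descriptors of non-overlapping segments."""
    descs = [encode_segment(clip[t:t + seg_len])
             for t in range(0, len(clip) - seg_len + 1, seg_len)]
    d = np.mean(descs, axis=0)
    return d / (np.linalg.norm(d) + 1e-8)     # L2-normalize for cosine matching

# Matching: cosine similarity between query and gallery clip descriptors.
query = clip_descriptor(rng.normal(size=(32, N_JOINTS, IN_DIM)))
gallery = clip_descriptor(rng.normal(size=(32, N_JOINTS, IN_DIM)))
score = float(query @ gallery)
```

In practice the encoder would be trained (e.g. with an identity-classification or metric-learning loss), and the skeleton sequences would come from a pose estimator run on the video; here both are stubbed with random data to keep the sketch self-contained.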
Similar Papers
Skeletons Speak Louder than Text: A Motion-Aware Pretraining Paradigm for Video-Based Person Re-Identification
CV and Pattern Recognition
Helps computers recognize people in videos by their movement.
Contextualized Multimodal Lifelong Person Re-Identification in Hybrid Clothing States
CV and Pattern Recognition
Helps cameras recognize people even with different clothes.
TSDW: A Tri-Stream Dynamic Weight Network for Cloth-Changing Person Re-Identification
CV and Pattern Recognition
Finds people even when they change clothes.