Beyond Appearance: Transformer-based Person Identification from Conversational Dynamics
By: Masoumeh Chapariniya, Teodora Vukovic, Sarah Ebling, and more
Potential Business Impact:
Identifies people by how they move and stand.
This paper investigates the performance of transformer-based architectures for person identification in natural, face-to-face conversation scenarios. We implement and evaluate a two-stream framework that separately models spatial configurations and temporal motion patterns of 133 COCO WholeBody keypoints, extracted from a subset of the CANDOR conversational corpus. Our experiments compare pre-trained models with models trained from scratch, investigate the use of velocity features, and introduce a multi-scale temporal transformer for hierarchical motion modeling. Results demonstrate that domain-specific training significantly outperforms transfer learning, and that spatial configurations carry more discriminative information than temporal dynamics. The spatial transformer achieves 95.74% accuracy, while the multi-scale temporal transformer achieves 93.90%. Feature-level fusion pushes performance to 98.03%, confirming that postural and dynamic information are complementary. These findings highlight the potential of transformer architectures for person identification in natural interactions and provide insights for future multimodal and cross-cultural studies.
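To make the two-stream idea concrete, here is a minimal PyTorch sketch of such a framework: one transformer encoder attends over the 133 keypoints within each frame (spatial configuration), a second attends over frame-to-frame velocity features (temporal dynamics), and the two embeddings are concatenated for feature-level fusion before an identity classifier. This is an illustrative reconstruction, not the authors' implementation; the layer sizes, the number of identities (`NUM_PERSONS`), and the pooling choices are assumptions.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 133   # COCO WholeBody keypoints, as in the paper
NUM_PERSONS = 10      # hypothetical number of identities (assumption)
D_MODEL = 64          # embedding width (assumption)


class SpatialStream(nn.Module):
    """Models the spatial configuration of keypoints within each frame."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(2, D_MODEL)  # (x, y) coordinates per keypoint
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, kpts):                        # kpts: (batch, frames, 133, 2)
        b, t, k, c = kpts.shape
        x = self.embed(kpts.reshape(b * t, k, c))   # tokens = keypoints
        x = self.encoder(x).mean(dim=1)             # pool over keypoints
        return x.reshape(b, t, -1).mean(dim=1)      # pool over frames


class TemporalStream(nn.Module):
    """Models motion via velocity features (frame differences) over time."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(NUM_KEYPOINTS * 2, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, kpts):
        vel = kpts[:, 1:] - kpts[:, :-1]            # per-frame velocities
        b, t, k, c = vel.shape
        x = self.embed(vel.reshape(b, t, k * c))    # tokens = frames
        return self.encoder(x).mean(dim=1)          # pool over time


class TwoStreamID(nn.Module):
    """Feature-level fusion of both streams, then identity classification."""
    def __init__(self):
        super().__init__()
        self.spatial = SpatialStream()
        self.temporal = TemporalStream()
        self.head = nn.Linear(2 * D_MODEL, NUM_PERSONS)

    def forward(self, kpts):
        fused = torch.cat([self.spatial(kpts), self.temporal(kpts)], dim=-1)
        return self.head(fused)


model = TwoStreamID()
clip = torch.randn(2, 16, NUM_KEYPOINTS, 2)  # 2 clips, 16 frames of keypoints
logits = model(clip)
print(logits.shape)  # torch.Size([2, 10])
```

Fusing at the feature level (concatenation before the classifier) rather than averaging the two streams' predictions matches the paper's finding that postural and dynamic cues are complementary.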
Similar Papers
Multi-Grained Feature Pruning for Video-Based Human Pose Estimation
CV and Pattern Recognition
Makes computer movement tracking faster and more accurate.
A Framework Combining 3D CNN and Transformer for Video-Based Behavior Recognition
CV and Pattern Recognition
Helps computers understand actions in videos better.
Investigating Identity Signals in Conversational Facial Dynamics via Disentangled Expression Features
CV and Pattern Recognition
Recognizes people by how their faces move.