Attention-Aware Multi-View Pedestrian Tracking
By: Reef Alturki, Adrian Hilton, Jean-Yves Guillemaut
Potential Business Impact:
Tracks people better even when they are hidden from view.
Despite recent advancements in multi-object tracking, occlusion remains a significant challenge. Multi-camera setups have been used to address this challenge by providing comprehensive coverage of the scene. Recent multi-view pedestrian detection models have highlighted the potential of an early-fusion strategy: projecting the feature maps of all views onto a common ground plane, or Bird's Eye View (BEV), before performing detection. This strategy has been shown to improve both detection and tracking performance. However, the perspective transformation introduces significant distortion on the ground plane, degrading the robustness of pedestrians' appearance features. To tackle this limitation, we propose a novel model that incorporates attention mechanisms into multi-view pedestrian tracking. Our model uses an early-fusion strategy for detection and a cross-attention mechanism to establish robust associations between pedestrians in different frames, while efficiently propagating pedestrian features across frames, resulting in a more robust feature representation for each pedestrian. Extensive experiments demonstrate that our model outperforms state-of-the-art models, achieving IDF1 scores of $96.1\%$ on the Wildtrack dataset and $85.7\%$ on the MultiviewX dataset.
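The abstract's cross-attention association step can be illustrated with a minimal sketch: detection features at frame $t$ act as queries against track features from frame $t-1$, and the resulting attention weights double as an affinity matrix for identity association. This is an illustrative toy in NumPy, not the authors' implementation; the orthogonal track embeddings and greedy argmax assignment are simplifying assumptions (a real tracker would use learned embeddings and a matching algorithm such as Hungarian assignment).

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: detections attend to tracks."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # (n_det, n_trk) similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ values, weights                  # attended features, affinity

# Toy track features from frame t-1 (orthogonal for clarity; real features are learned).
track_feats = np.eye(3, 8) * 2.0                      # 3 tracks, 8-dim embeddings
# Detections at frame t: the same people observed in a different order.
det_feats = track_feats[[2, 0, 1]]

attended, affinity = cross_attention(det_feats, track_feats, track_feats)
assignment = affinity.argmax(axis=1)                  # greedy identity association
print(assignment.tolist())                            # [2, 0, 1]
```

The attended output blends each detection's feature with its matched track's feature, which is one simple way to propagate pedestrian appearance across frames as the abstract describes.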
Similar Papers
Multi-View Industrial Anomaly Detection with Epipolar Constrained Cross-View Fusion
CV and Pattern Recognition
Finds factory flaws using many camera views.
Enhanced Multi-View Pedestrian Detection Using Probabilistic Occupancy Volume
CV and Pattern Recognition
Helps self-driving cars spot hidden people better.
DINO-CoDT: Multi-class Collaborative Detection and Tracking with Vision Foundation Models
CV and Pattern Recognition
Helps cars see and track all road users.