Track and Caption Any Motion: Query-Free Motion Discovery and Description in Videos
By: Bishoy Galoaa, Sarah Ostadabbas
Potential Business Impact:
Lets computers describe what's happening in videos.
We propose Track and Caption Any Motion (TCAM), a motion-centric framework for automatic video understanding that discovers and describes motion patterns without user queries. Understanding videos in challenging conditions like occlusion, camouflage, or rapid movement often depends more on motion dynamics than static appearance. TCAM autonomously observes a video, identifies multiple motion activities, and spatially grounds each natural language description to its corresponding trajectory through a motion-field attention mechanism. Our key insight is that motion patterns, when aligned with contrastive vision-language representations, provide powerful semantic signals for recognizing and describing actions. Through unified training that combines global video-text alignment with fine-grained spatial correspondence, TCAM enables query-free discovery of multiple motion expressions via multi-head cross-attention. On the MeViS benchmark, TCAM achieves 58.4% video-to-text retrieval, 64.9 J&F for spatial grounding, and discovers 4.8 relevant expressions per video with 84.7% precision, demonstrating strong cross-task generalization.
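To make the query-free discovery idea concrete, here is a minimal sketch of how learned query slots could attend over motion-field tokens via multi-head cross-attention and be scored against expression embeddings in a shared contrastive space. This is an illustrative reconstruction from the abstract, not the authors' released code; the module name `QueryFreeMotionDiscovery`, the slot bank, and all dimensions are assumptions.

```python
# Hedged sketch of query-free motion discovery: learned query slots stand in
# for user queries, each slot binding to one motion pattern in the video.
# Names and hyperparameters are illustrative, not TCAM's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryFreeMotionDiscovery(nn.Module):
    def __init__(self, embed_dim=256, num_slots=8, num_heads=8):
        super().__init__()
        # A bank of learned query slots replaces user-provided queries.
        self.slots = nn.Parameter(torch.randn(num_slots, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, motion_feats):
        # motion_feats: (B, N, D) spatio-temporal motion-field tokens,
        # where N = T*H*W flattened video positions.
        B = motion_feats.size(0)
        queries = self.slots.unsqueeze(0).expand(B, -1, -1)  # (B, S, D)
        # Each slot attends over all motion tokens; the attention map acts
        # as a soft spatial grounding of the slot's trajectory.
        attended, attn_weights = self.cross_attn(queries, motion_feats, motion_feats)
        return self.norm(attended), attn_weights  # (B, S, D), (B, S, N)


def contrastive_alignment(slot_embeds, text_embeds, temperature=0.07):
    # Score each discovered slot against candidate expression embeddings in a
    # shared space, as in contrastive vision-language alignment.
    slot_embeds = F.normalize(slot_embeds, dim=-1)   # (B, S, D)
    text_embeds = F.normalize(text_embeds, dim=-1)   # (B, E, D)
    return slot_embeds @ text_embeds.transpose(1, 2) / temperature  # (B, S, E)


if __name__ == "__main__":
    B, N, D, E = 2, 512, 256, 10  # batch, motion tokens, embed dim, expressions
    model = QueryFreeMotionDiscovery(embed_dim=D)
    slots, attn = model(torch.randn(B, N, D))
    scores = contrastive_alignment(slots, torch.randn(B, E, D))
    print(slots.shape, attn.shape, scores.shape)
```

In this reading, the attention weights provide the per-slot spatial grounding while the slot-text similarity matrix supports both retrieval and discovery: slots whose best expression score clears a threshold would yield the multiple expressions per video reported above.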
Similar Papers
Dense Motion Captioning
CV and Pattern Recognition
Helps computers understand and describe human movements.
Explicit Temporal-Semantic Modeling for Dense Video Captioning via Context-Aware Cross-Modal Interaction
CV and Pattern Recognition
Helps computers describe what happens in videos.
PostCam: Camera-Controllable Novel-View Video Generation with Query-Shared Cross-Attention
CV and Pattern Recognition
Changes camera views in videos after they are filmed.