VideoGEM: Training-free Action Grounding in Videos
By: Felix Vogel, Walid Bousselham, Anna Kukleva and more
Potential Business Impact:
Finds actions in videos without extra training.
Vision-language foundation models have shown impressive capabilities across various zero-shot tasks, including training-free localization and grounding, primarily focusing on localizing objects in images. However, leveraging those capabilities to localize actions and events in videos is challenging, as actions have a less distinct physical outline and are usually described by higher-level concepts. In this work, we propose VideoGEM, the first training-free spatial action grounding method based on pretrained image- and video-language backbones. Namely, we adapt the self-self attention formulation of GEM to spatial activity grounding. We observe that high-level semantic concepts, such as actions, usually emerge in the higher layers of image- and video-language models. We, therefore, propose a layer weighting in the self-attention path to prioritize higher layers. Additionally, we introduce a dynamic weighting method to automatically tune layer weights, capturing each layer's relevance to a specific prompt. Finally, we introduce a prompt decomposition, processing action, verb, and object prompts separately, which results in better spatial localization of actions. We evaluate the proposed approach on three image- and video-language backbones, CLIP, OpenCLIP, and ViCLIP, and on four video grounding datasets, V-HICO, DALY, YouCook-Interactions, and GroundingYouTube, showing that the proposed training-free approach is able to outperform current trained state-of-the-art approaches for spatial video grounding.
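To make the three ingredients of the abstract concrete, here is a minimal, hedged sketch of how per-layer self-self attention heatmaps might be combined: static weights that favor higher layers, dynamic weights tuned by each layer's similarity to the prompt, and a fusion of separate action, verb, and object prompts. The function names, tensor shapes, the 50/50 mix of static and dynamic weights, and the random placeholder inputs are all illustrative assumptions, not the authors' implementation or the GEM API.

```python
# Illustrative sketch only: random tensors stand in for the backbone's real
# per-layer self-self attention maps and pooled features.
import torch

def static_layer_weights(num_layers: int, growth: float = 1.5) -> torch.Tensor:
    """Monotonically increasing weights so higher (more semantic) layers dominate."""
    w = growth ** torch.arange(num_layers, dtype=torch.float32)
    return w / w.sum()

def dynamic_layer_weights(layer_embeds: torch.Tensor, text_embed: torch.Tensor) -> torch.Tensor:
    """Weight each layer by the similarity of its pooled embedding to the prompt embedding."""
    sims = torch.cosine_similarity(layer_embeds, text_embed.unsqueeze(0), dim=-1)  # (L,)
    return torch.softmax(sims, dim=0)

def fuse_layers(heatmaps: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Weighted sum of per-layer heatmaps: (L, H, W) -> (H, W)."""
    return (weights.view(-1, 1, 1) * heatmaps).sum(dim=0)

def decompose_and_fuse(heatmaps_by_prompt: dict, weights: torch.Tensor) -> torch.Tensor:
    """Average the fused heatmaps of the action, verb, and object prompts."""
    fused = [fuse_layers(h, weights) for h in heatmaps_by_prompt.values()]
    return torch.stack(fused).mean(dim=0)

# Toy example with placeholder data (12 layers, 14x14 patch grid, 512-dim features).
L, H, W, D = 12, 14, 14, 512
prompts = {"action": "cutting a tomato", "verb": "cutting", "object": "tomato"}
heatmaps = {p: torch.rand(L, H, W) for p in prompts}   # stand-in self-self attention maps
layer_embeds = torch.randn(L, D)                       # stand-in pooled per-layer features
text_embed = torch.randn(D)                            # stand-in prompt embedding

weights = 0.5 * static_layer_weights(L) + 0.5 * dynamic_layer_weights(layer_embeds, text_embed)
localization = decompose_and_fuse(heatmaps, weights)   # (H, W); its argmax gives the predicted location
print(localization.shape)
```

The key design point the sketch tries to capture is that the final heatmap is a prompt-dependent mixture over layers rather than the output of a single fixed layer, which is what lets higher-level action concepts surface without any training.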
Similar Papers
Grounding-MD: Grounded Video-language Pre-training for Open-World Moment Detection
CV and Pattern Recognition
Finds any action in videos from any description.
Zero-Shot Open-Vocabulary Human Motion Grounding with Test-Time Training
CV and Pattern Recognition
Lets computers understand actions without being taught.
AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making
Robotics
Robots learn new tasks without practice.