Towards an Effective Action-Region Tracking Framework for Fine-grained Video Action Recognition
By: Baoli Sun, Yihan Wang, Xinzhu Ma, and more
Potential Business Impact:
Helps computers tell apart very similar actions.
Fine-grained action recognition (FGAR) aims to identify subtle and distinctive differences among fine-grained action categories. However, current recognition methods often capture coarse-grained motion patterns but struggle to identify subtle details in local regions evolving over time. In this work, we introduce the Action-Region Tracking (ART) framework, a novel solution that leverages a query-response mechanism to discover and track the dynamics of distinctive local details, enabling effective discrimination of similar actions. Specifically, we propose a region-specific semantic activation module that employs discriminative, text-constrained semantics as queries to capture the most action-related region responses in each video frame, facilitating interaction with the corresponding video features across spatial and temporal dimensions. The captured region responses are organized into action tracklets, which characterize region-based action dynamics by linking related responses across video frames in a coherent sequence. The text-constrained queries encode nuanced semantic representations derived from textual descriptions of action labels extracted by the language branches of Visual Language Models (VLMs). To optimize the action tracklets, we design a multi-level tracklet contrastive constraint among region responses at the spatial and temporal levels, enabling effective discrimination within each frame and correlation between adjacent frames. Additionally, a task-specific fine-tuning mechanism refines the textual semantics so that the semantic representations encoded by VLMs are preserved while being optimized for task preferences. Comprehensive experiments on widely used action recognition benchmarks demonstrate its superiority over previous state-of-the-art baselines.
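The query-response mechanism and tracklet construction described in the abstract can be sketched as simple cross-attention followed by per-frame linking. This is a minimal illustrative sketch, not the authors' implementation: all names, shapes, and the cosine-similarity stand-in for the temporal contrastive constraint are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def region_responses(frame_feats, queries):
    """Cross-attention sketch: each text-derived query attends over one
    frame's region features and returns a query-specific response.
    frame_feats: (N, D) region features; queries: (Q, D)."""
    d = frame_feats.shape[1]
    attn = softmax(queries @ frame_feats.T / np.sqrt(d))  # (Q, N) weights
    return attn @ frame_feats                             # (Q, D) responses

def build_tracklets(video_feats, queries):
    """Link per-frame responses into tracklets of shape (Q, T, D)."""
    responses = [region_responses(f, queries) for f in video_feats]
    return np.stack(responses, axis=1)

def temporal_coherence(tracklets):
    """Mean cosine similarity between adjacent-frame responses; a crude
    stand-in for the temporal side of the tracklet contrastive constraint."""
    a, b = tracklets[:, :-1], tracklets[:, 1:]
    cos = (a * b).sum(-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8
    )
    return float(cos.mean())

rng = np.random.default_rng(0)
video = rng.normal(size=(8, 49, 64))   # 8 frames, 49 regions, 64-dim (hypothetical)
queries = rng.normal(size=(4, 64))     # 4 hypothetical text-constrained queries
tracks = build_tracklets(video, queries)
print(tracks.shape)                    # one tracklet per query: (4, 8, 64)
```

In the actual framework the queries come from a VLM language branch and the attention is learned; here random features simply show the data flow from per-frame responses to linked tracklets.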
Similar Papers
Efficient Spatial-Temporal Modeling for Real-Time Video Analysis: A Unified Framework for Action Recognition and Object Tracking
CV and Pattern Recognition
Lets computers understand fast video actions better.
Generative Model-Based Feature Attention Module for Video Action Analysis
CV and Pattern Recognition
Helps computers understand what's happening in videos.
Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs
CV and Pattern Recognition
Lets computers understand any part of a picture.