Context-Aware Network Based on Multi-scale Spatio-temporal Attention for Action Recognition in Videos
By: Xiaoyang Li, Wenzhu Yang, Kanglin Wang, and more
Action recognition is a core task in video understanding that requires capturing spatio-temporal cues across multiple scales. Existing methods, however, often overlook the multi-granularity nature of actions. To address this limitation, we introduce the Context-Aware Network (CAN), which consists of two core modules: the Multi-scale Temporal Cue Module (MTCM) and the Group Spatial Cue Module (GSCM). MTCM extracts temporal cues at multiple scales, capturing both fast-changing motion details and the overall flow of an action. GSCM extracts spatial cues at different scales by splitting feature maps into groups and applying a scale-specific extraction method to each group. Experiments on five benchmark datasets (Something-Something V1 and V2, Diving48, Kinetics-400, and UCF101) demonstrate the effectiveness of CAN: it achieves competitive performance, outperforming most mainstream methods, with accuracies of 50.4% on Something-Something V1, 63.9% on Something-Something V2, 88.4% on Diving48, 74.9% on Kinetics-400, and 86.9% on UCF101. These results highlight the importance of multi-scale spatio-temporal cues for robust action recognition.
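The abstract describes the two modules only at a high level, so the PyTorch sketch below is an assumption-laden illustration of the general ideas, not the paper's actual architecture: an MTCM-style block that runs parallel temporal convolutions with different kernel sizes to capture both fast motion details and slower action flow, and a GSCM-style block that chunks channels into groups and gives each group a different spatial receptive field. All class names, kernel sizes, dilations, and the group count here are hypothetical.

import torch
import torch.nn as nn

class MTCM(nn.Module):
    """Multi-scale Temporal Cue Module (sketch): parallel depthwise
    temporal convolutions with different kernel sizes, fused by a
    pointwise convolution. Kernel sizes are illustrative assumptions."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):  # x: (N, C, T), features along the temporal axis
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(y)

class GSCM(nn.Module):
    """Group Spatial Cue Module (sketch): split channels into groups and
    apply a different extractor per group; here, increasing dilation
    gives each group a larger spatial scale (an assumption)."""
    def __init__(self, channels, num_groups=4):
        super().__init__()
        assert channels % num_groups == 0
        c = channels // num_groups
        self.extractors = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=d, dilation=d)
            for d in range(1, num_groups + 1)
        )

    def forward(self, x):  # x: (N, C, H, W), per-frame feature map
        groups = torch.chunk(x, len(self.extractors), dim=1)
        return torch.cat([f(g) for f, g in zip(self.extractors, groups)], dim=1)

# Quick shape check: both blocks preserve their input shapes.
if __name__ == "__main__":
    t = MTCM(64)(torch.randn(2, 64, 8))        # -> (2, 64, 8)
    s = GSCM(64)(torch.randn(2, 64, 14, 14))   # -> (2, 64, 14, 14)
    print(t.shape, s.shape)

In this sketch the multi-branch temporal fusion stands in for MTCM's multi-scale temporal cues and the per-group dilated convolutions stand in for GSCM's group-wise spatial extraction; the paper may combine, weight, or attend over these scales differently.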
Similar Papers
MGCA-Net: Multi-Grained Category-Aware Network for Open-Vocabulary Temporal Action Localization
CV and Pattern Recognition
Finds any action in videos, even new ones.
Action Anticipation at a Glimpse: To What Extent Can Multimodal Cues Replace Video?
CV and Pattern Recognition
Predicts what happens next from just one picture.
Track and Caption Any Motion: Query-Free Motion Discovery and Description in Videos
CV and Pattern Recognition
Lets computers describe what's happening in videos.