Is Temporal Prompting All We Need For Limited Labeled Action Recognition?
By: Shreyank N Gowda, Boyan Gao, Xiao Gu, and more
Potential Business Impact:
Teaches computers to understand videos without lots of labels.
Video understanding has shown remarkable improvements in recent years, but this progress depends heavily on the availability of large-scale labeled datasets. Recent advancements in visual-language models, especially those based on contrastive pretraining, have shown strong generalization on zero-shot tasks, helping to reduce this dependence on labeled data. Adapting such models to video typically involves modifying the architecture of the vision-language model to handle video input. This is not trivial: such adaptations are usually computationally intensive and still struggle with temporal modeling. We present TP-CLIP, an adaptation of CLIP that leverages temporal visual prompting for temporal adaptation without modifying the core CLIP architecture, thereby preserving its generalization abilities. TP-CLIP integrates efficiently into the CLIP architecture, leveraging its pre-trained capabilities for video data. Extensive experiments across various datasets demonstrate its efficacy in zero-shot and few-shot learning, outperforming existing approaches while using fewer parameters and less computation. In particular, we use just 1/3 the GFLOPs and 1/28 the number of tunable parameters compared to the recent state-of-the-art, and still outperform it by up to 15.8% depending on the task and dataset.
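To make the core idea concrete, below is a minimal sketch of how learnable temporal visual prompts could be attached to a frozen CLIP encoder for zero-shot video classification. It assumes PyTorch and a pre-trained CLIP model exposing an `encode_image` method in the style of the openai/CLIP package. The `TemporalPromptGenerator` and `TPCLIPSketch` classes, and the way prompts are merged with frame features, are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (not the authors' code): learnable temporal prompts derived from
# frozen CLIP frame embeddings. All class and parameter names are hypothetical.
import torch
import torch.nn as nn


class TemporalPromptGenerator(nn.Module):
    """Produces prompt tokens that summarize frame-to-frame dynamics."""

    def __init__(self, embed_dim: int = 512, num_prompts: int = 4):
        super().__init__()
        self.num_prompts = num_prompts
        # Lightweight mixer over the temporal axis; CLIP itself stays frozen.
        self.temporal_mixer = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.to_prompts = nn.Linear(embed_dim, num_prompts * embed_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, embed_dim) from a frozen CLIP image encoder
        _, last_hidden = self.temporal_mixer(frame_feats)   # (1, batch, embed_dim)
        prompts = self.to_prompts(last_hidden.squeeze(0))   # (batch, num_prompts * embed_dim)
        return prompts.view(-1, self.num_prompts, frame_feats.size(-1))


class TPCLIPSketch(nn.Module):
    """Zero-shot video classification: pool prompted frame features and
    compare them against CLIP text embeddings of the class names."""

    def __init__(self, clip_model, embed_dim: int = 512):
        super().__init__()
        self.clip = clip_model  # frozen, pre-trained CLIP
        for p in self.clip.parameters():
            p.requires_grad_(False)
        self.prompt_gen = TemporalPromptGenerator(embed_dim)

    def forward(self, frames: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 3, H, W); text_feats: (num_classes, embed_dim)
        b, t = frames.shape[:2]
        frame_feats = self.clip.encode_image(frames.flatten(0, 1)).view(b, t, -1).float()
        prompts = self.prompt_gen(frame_feats)               # (b, num_prompts, embed_dim)
        video_feat = torch.cat([prompts, frame_feats], dim=1).mean(dim=1)
        video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
        return 100.0 * video_feat @ text_feats.t()           # class logits
```

In this sketch, only the prompt generator is trained, which mirrors the paper's emphasis on a small tunable-parameter budget relative to adapting the full vision-language backbone.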
Similar Papers
Generalizable Prompt Learning of CLIP: A Brief Overview
CV and Pattern Recognition
Teaches computers to understand pictures and words.
STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding
CV and Pattern Recognition
Helps computers understand videos by focusing on important parts.
VTD-CLIP: Video-to-Text Discretization via Prompting CLIP
CV and Pattern Recognition
Helps computers understand videos by turning them into words.