VTD-CLIP: Video-to-Text Discretization via Prompting CLIP
By: Wencheng Zhu, Yuexin Wang, Hongxuan Li, and more
Potential Business Impact:
Helps computers understand videos by turning them into words.
Vision-language models bridge visual and linguistic understanding and have proven powerful for video recognition tasks. Existing approaches primarily rely on parameter-efficient fine-tuning of image-text pre-trained models, yet they often suffer from limited interpretability and poor generalization due to inadequate temporal modeling. To address these issues, we propose a simple yet effective video-to-text discretization framework. Our method repurposes the frozen text encoder to construct a visual codebook from video class labels, exploiting the many-to-one contrastive alignment between visual and textual embeddings learned during multimodal pretraining. This codebook transforms temporal visual data into textual tokens via feature lookups and offers interpretable video representations through explicit video modeling. To enhance robustness against irrelevant or noisy frames, we then introduce a confidence-aware fusion module that dynamically weights keyframes by assessing their semantic relevance via the codebook. Furthermore, our method incorporates learnable text prompts to adaptively update the codebook. Extensive experiments on HMDB-51, UCF-101, SSv2, and Kinetics-400 validate the superiority of our approach, which achieves competitive improvements over state-of-the-art methods. The code will be publicly available at https://github.com/isxinxin/VTD-CLIP.
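The two core ideas in the abstract, nearest-code lookup against a text-derived codebook and confidence-weighted frame fusion, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the array shapes, function names, and the use of random vectors in place of real CLIP frame and class-label embeddings are all assumptions for illustration; it only assumes embeddings are L2-normalized so that dot products equal cosine similarities.

```python
import numpy as np

def discretize_frames(frame_feats, codebook):
    """Map each frame feature to its nearest codebook entry.

    frame_feats: (T, D) frame embeddings; codebook: (K, D) class-label
    text embeddings. Both assumed L2-normalized, so the dot product is
    the cosine similarity used for the lookup.
    """
    sims = frame_feats @ codebook.T   # (T, K) frame-to-code similarities
    idx = sims.argmax(axis=1)         # nearest textual token per frame
    conf = sims.max(axis=1)           # best similarity as a confidence score
    return idx, conf

def confidence_fusion(frame_feats, conf):
    """Fuse frames into one video feature with softmax confidence weights,
    so low-relevance (noisy) frames contribute less."""
    w = np.exp(conf - conf.max())
    w /= w.sum()
    return (w[:, None] * frame_feats).sum(axis=0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in data: 8 frames and 51 class-label codes (e.g. HMDB-51), 512-d.
rng = np.random.default_rng(0)
frames = l2norm(rng.normal(size=(8, 512)))
codes = l2norm(rng.normal(size=(51, 512)))

idx, conf = discretize_frames(frames, codes)
video_feat = confidence_fusion(frames, conf)
print(idx.shape, video_feat.shape)  # (8,) (512,)
```

In the paper's framework the codebook entries would come from the frozen text encoder applied to (learnably prompted) class labels, not random vectors, and the learnable prompts would let the codebook adapt during training.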
Similar Papers
Generalizable Prompt Learning of CLIP: A Brief Overview
CV and Pattern Recognition
Teaches computers to understand pictures and words.
Is Temporal Prompting All We Need For Limited Labeled Action Recognition?
CV and Pattern Recognition
Teaches computers to understand videos without lots of labels.
STOP: Integrated Spatial-Temporal Dynamic Prompting for Video Understanding
CV and Pattern Recognition
Helps computers understand videos by focusing on important parts.