Enhancing Subsequent Video Retrieval via Vision-Language Models (VLMs)
By: Yicheng Duan, Xi Huang, Duo Chen
Potential Business Impact:
Find videos faster by understanding their stories.
The rapid growth of video content demands efficient and precise retrieval systems. While vision-language models (VLMs) excel in representation learning, they often struggle with adaptive, time-sensitive video retrieval. This paper introduces a novel framework that combines vector similarity search with graph-based data structures. By leveraging VLM embeddings for initial retrieval and modeling contextual relationships among video segments, our approach enables adaptive query refinement and improves retrieval accuracy. Experiments demonstrate the framework's precision, scalability, and robustness, offering an effective solution for interactive video retrieval in dynamic environments.
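To make the pipeline concrete, below is a minimal sketch of the two-stage retrieval idea the abstract describes: VLM embeddings drive an initial vector similarity search over video segments, and a graph linking related segments is then used to expand and refine the candidate set. This is an illustrative assumption of how such a system could be wired together, not the authors' actual implementation; the helper names, the use of NetworkX, the similarity threshold, and the precomputed embeddings are all hypothetical.

# Sketch: VLM-embedding retrieval followed by graph-based expansion over
# video segments. Assumes precomputed segment embeddings; the models,
# thresholds, and graph-construction choices here are illustrative only.
import numpy as np
import networkx as nx

def build_segment_graph(segment_ids, temporal_pairs, embeddings, sim_threshold=0.8):
    """Graph over segments: edges for temporal adjacency and high embedding similarity."""
    g = nx.Graph()
    g.add_nodes_from(segment_ids)
    g.add_edges_from(temporal_pairs)  # e.g., consecutive segments within one video
    # Add semantic edges between segments whose embeddings are very similar.
    norms = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = norms @ norms.T
    for i in range(len(segment_ids)):
        for j in range(i + 1, len(segment_ids)):
            if sims[i, j] >= sim_threshold:
                g.add_edge(segment_ids[i], segment_ids[j])
    return g

def retrieve(query_emb, segment_ids, embeddings, graph, top_k=5, hops=1):
    """Vector similarity search for seed segments, then expansion along graph neighbors."""
    q = query_emb / np.linalg.norm(query_emb)
    norms = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = norms @ q
    seed_idx = np.argsort(-scores)[:top_k]
    seeds = [segment_ids[i] for i in seed_idx]
    # Expand each seed to its graph neighborhood to capture contextually related segments.
    expanded = set(seeds)
    for s in seeds:
        expanded |= set(nx.single_source_shortest_path_length(graph, s, cutoff=hops))
    # Re-rank the expanded candidate set by similarity to the query.
    idx_of = {sid: i for i, sid in enumerate(segment_ids)}
    return sorted(expanded, key=lambda sid: -scores[idx_of[sid]])

In this sketch the query embedding would come from the same VLM used to embed the segments (e.g., a CLIP-style text encoder), and adaptive refinement could be approximated by re-running retrieve with an updated query embedding while reusing the same segment graph.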
Similar Papers
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.
Semantic-Clipping: Efficient Vision-Language Modeling with Semantic-Guided Visual Selection
CV and Pattern Recognition
Helps computers understand pictures better by focusing on important parts.
Summarization of Multimodal Presentations with Vision-Language Models: Study of the Effect of Modalities and Structure
CV and Pattern Recognition
Helps computers summarize videos and text together.