Enhancing Subsequent Video Retrieval via Vision-Language Models (VLMs)

Published: March 21, 2025 | arXiv ID: 2503.17415v1

By: Yicheng Duan, Xi Huang, Duo Chen

Potential Business Impact:

Find videos faster by understanding their stories.

Business Areas:
Image Recognition, Data and Analytics, Software

The rapid growth of video content demands efficient and precise retrieval systems. While vision-language models (VLMs) excel in representation learning, they often struggle with adaptive, time-sensitive video retrieval. This paper introduces a novel framework that combines vector similarity search with graph-based data structures. By leveraging VLM embeddings for initial retrieval and modeling contextual relationships among video segments, our approach enables adaptive query refinement and improves retrieval accuracy. Experiments demonstrate its precision, scalability, and robustness, offering an effective solution for interactive video retrieval in dynamic environments.
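The paper does not publish reference code, but the two-stage idea in the abstract (vector similarity search over VLM embeddings, then graph-based context expansion over video segments) can be illustrated with a minimal sketch. Everything below is a hypothetical assumption: the function names (`retrieve`, `cosine_sim`), the `hop_weight` parameter, and the toy adjacency graph are illustrative, not the authors' implementation.

```python
import numpy as np

def cosine_sim(query: np.ndarray, index: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and an embedding index."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    return m @ q

def retrieve(query_emb, segment_embs, segment_graph, k=3, hop_weight=0.5):
    """Two-stage retrieval sketch (assumed, not the paper's exact method):

    1. Rank segments by cosine similarity of VLM embeddings to the query.
    2. Propagate each top hit's score to its graph neighbors (e.g.,
       temporally adjacent segments), so contextually related segments
       surface alongside the direct matches.
    """
    sims = cosine_sim(query_emb, segment_embs)
    scores = sims.copy()
    top = np.argsort(sims)[::-1][:k]
    for seg in top:
        for nbr in segment_graph.get(int(seg), []):
            scores[nbr] += hop_weight * sims[seg]  # contextual boost
    return np.argsort(scores)[::-1][:k]

# Toy example: 5 segments with 4-dim "VLM" embeddings; edges link
# temporally adjacent segments of the same video.
rng = np.random.default_rng(0)
embs = rng.normal(size=(5, 4))
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
query = embs[1] + 0.1 * rng.normal(size=4)  # query near segment 1
print(retrieve(query, embs, graph, k=3))
```

In this sketch, the graph stands in for the contextual relationships the abstract describes; adaptive query refinement could then re-run `retrieve` with an updated query embedding after user feedback.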

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computer Vision and Pattern Recognition