FastV-RAG: Towards Fast and Fine-Grained Video QA with Retrieval-Augmented Generation
By: Gen Li, Peiyu Liu
Potential Business Impact:
Speeds up AI systems that watch videos and answer questions about them.
Vision-Language Models (VLMs) excel at visual reasoning but still struggle to integrate external knowledge. Retrieval-Augmented Generation (RAG) is a promising solution, but current methods remain inefficient and often fail to maintain high answer quality. To address these challenges, we propose VideoSpeculateRAG, an efficient VLM-based RAG framework built on two key ideas. First, we introduce a speculative decoding pipeline: a lightweight draft model quickly generates multiple answer candidates, which are then verified and refined by a more accurate heavyweight model, substantially reducing inference latency without sacrificing correctness. Second, we identify a major source of error, incorrect entity recognition in retrieved knowledge, and mitigate it with a simple yet effective similarity-based filtering strategy that improves entity alignment and boosts overall answer accuracy. Experiments demonstrate that VideoSpeculateRAG matches or exceeds the accuracy of standard RAG approaches while accelerating inference by approximately 2x. Our framework highlights the potential of combining speculative decoding with retrieval-augmented reasoning to enhance efficiency and reliability in complex, knowledge-intensive multimodal tasks.
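The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the use of `difflib` character similarity as a stand-in for entity matching (a real system would likely use embeddings), and the toy draft/verify interfaces are all assumptions for illustration.

```python
# Hedged sketch of (1) similarity-based entity filtering and
# (2) a draft-then-verify speculative answering loop.
# All names and interfaces here are illustrative assumptions,
# not the paper's actual API.
from difflib import SequenceMatcher


def entity_similarity(a: str, b: str) -> float:
    """Cheap character-level similarity proxy for entity matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def filter_passages(passages, query_entity, threshold=0.6):
    """Keep only retrieved passages whose entity aligns with the query entity."""
    return [p for p in passages
            if entity_similarity(p["entity"], query_entity) >= threshold]


def speculative_answer(draft_model, verify_model, question, passages, num_drafts=3):
    """Draft-then-verify: a cheap model proposes candidates cheaply,
    a heavier model scores them once each and the best is returned."""
    candidates = [draft_model(question, passages, i) for i in range(num_drafts)]
    return max(candidates, key=lambda ans: verify_model(question, passages, ans))
```

The latency win in this pattern comes from the heavyweight model only scoring a handful of short candidates instead of generating the full answer token by token.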
Similar Papers
VideoRAG: Retrieval-Augmented Generation over Video Corpus
CV and Pattern Recognition
Helps computers understand videos to answer questions.
E-VRAG: Enhancing Long Video Understanding with Resource-Efficient Retrieval Augmented Generation
CV and Pattern Recognition
Makes computers understand long videos faster and better.
QualiRAG: Retrieval-Augmented Generation for Visual Quality Understanding
CV and Pattern Recognition
Helps computers judge picture quality without training.