Knowledge-based Visual Question Answering with Multimodal Processing, Retrieval and Filtering
By: Yuyang Hong, Jiaqi Gu, Qi Yang, and more
Potential Business Impact:
Helps computers answer questions using pictures and facts.
Knowledge-based visual question answering (KB-VQA) requires visual language models (VLMs) to integrate visual understanding with external knowledge retrieval. Although retrieval-augmented generation (RAG) has advanced this task substantially by querying external knowledge bases, it still struggles with the quality of multimodal queries and the relevance of retrieved results. To overcome these challenges, we propose a novel three-stage method, termed Wiki-PRF, comprising Processing, Retrieval, and Filtering stages. The Processing stage dynamically invokes visual tools to extract precise multimodal information for retrieval. The Retrieval stage fuses visual and text features to perform multimodal knowledge retrieval. The Filtering stage filters the retrieved results for relevance and condenses them. To this end, we train a visual language model via reinforcement learning, with answer accuracy and format consistency as reward signals. This enhances the model's reasoning, its tool invocation for accurate queries, and its filtering of irrelevant content. Experiments on benchmark datasets (E-VQA and InfoSeek) show significant improvements in answer quality (36.0 and 42.8, respectively), achieving state-of-the-art performance. Code is available at https://github.com/cqu-student/Wiki-PRF
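Since the abstract describes a concrete three-stage pipeline plus an RL reward, a toy illustration may help. The following minimal, self-contained Python sketch mirrors the Processing, Retrieval, and Filtering stages and the accuracy-plus-format reward described above. Every name here (Passage, processing_stage, lambda_fmt, the word-overlap scorer) is an illustrative stand-in, not the authors' API; the real implementation uses a VLM with visual tools and a Wikipedia-scale multimodal index, and lives in the linked repository.

    # Toy sketch of the Wiki-PRF-style pipeline described in the abstract.
    # All names and scoring functions are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Passage:
        text: str
        score: float = 0.0

    def processing_stage(question: str, image_caption: str) -> str:
        # Stage 1 (Processing): in Wiki-PRF the VLM decides which visual
        # tools to invoke (captioning, OCR, grounding, ...) and composes a
        # precise query; this toy splices a caption into the question.
        return f"{question} [context: {image_caption}]"

    def retrieval_stage(query: str, knowledge_base: list[str],
                        top_k: int = 3) -> list[Passage]:
        # Stage 2 (Retrieval): the real system fuses visual and text
        # embeddings; this toy version scores passages by word overlap.
        q_words = set(query.lower().split())
        scored = [Passage(doc, float(len(q_words & set(doc.lower().split()))))
                  for doc in knowledge_base]
        return sorted(scored, key=lambda p: p.score, reverse=True)[:top_k]

    def filtering_stage(passages: list[Passage],
                        min_score: float = 2.0) -> list[Passage]:
        # Stage 3 (Filtering): the RL-trained VLM judges relevance and
        # drops noise; the toy version simply thresholds the score.
        return [p for p in passages if p.score >= min_score]

    def reward(predicted: str, gold: str, well_formatted: bool,
               lambda_fmt: float = 0.5) -> float:
        # The abstract's RL reward: answer accuracy plus a format-
        # consistency term (lambda_fmt is an assumed weighting).
        accuracy = float(predicted.strip().lower() == gold.strip().lower())
        return accuracy + lambda_fmt * float(well_formatted)

    if __name__ == "__main__":
        kb = ["The Eiffel Tower is in Paris.", "Mount Fuji is in Japan."]
        query = processing_stage("Where is this tower?",
                                 "a photo of the Eiffel Tower")
        evidence = filtering_stage(retrieval_stage(query, kb))
        print([p.text for p in evidence])   # keeps only the relevant passage
        print(reward("Paris", "paris", well_formatted=True))  # 1.5

The design point the sketch is meant to surface: retrieval quality depends on the query built in stage 1, and the final answer depends on how aggressively stage 3 prunes, which is why the paper trains those decisions with reinforcement learning rather than fixing them by hand.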
Similar Papers
mKG-RAG: Multimodal Knowledge Graph-Enhanced RAG for Visual Question Answering
CV and Pattern Recognition
Helps computers answer questions about pictures better.
ReAG: Reasoning-Augmented Generation for Knowledge-based Visual Question Answering
CV and Pattern Recognition
Helps AI answer hard questions using extra facts.
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval
Information Retrieval
Helps computers answer questions about pictures.