Multimodal Iterative RAG for Knowledge Visual Question Answering
By: Changin Choi, Wonseok Lee, Jungmin Ko, and more
Potential Business Impact:
Helps computers answer harder questions using more information.
While Multimodal Large Language Models (MLLMs) have significantly advanced multimodal understanding, their performance remains limited on knowledge-intensive visual questions that require external knowledge beyond the image. Although Retrieval-Augmented Generation (RAG) has become a promising solution for providing models with external knowledge, its conventional single-pass framework often fails to gather sufficient knowledge. To overcome this limitation, we propose MI-RAG, a Multimodal Iterative RAG framework that leverages reasoning to enhance retrieval and updates its reasoning over newly retrieved knowledge across modalities. At each iteration, MI-RAG uses an accumulated reasoning record to dynamically formulate a multi-query. These queries then drive a joint search across heterogeneous knowledge bases containing both visually grounded and textual knowledge. The newly acquired knowledge is synthesized into the reasoning record, progressively refining understanding across iterations. Experiments on challenging benchmarks, including Encyclopedic VQA, InfoSeek, and OK-VQA, show that MI-RAG significantly improves both retrieval recall and answer accuracy, establishing a scalable approach for compositional reasoning in knowledge-intensive VQA.
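The abstract describes an iterative loop: formulate a multi-query from an accumulated reasoning record, jointly search heterogeneous knowledge bases, and fold the retrieved knowledge back into the record before answering. The Python sketch below illustrates only that control flow; the `mllm` and knowledge-base interfaces and every method name (`formulate_queries`, `search`, `synthesize`, `answer_or_none`, `answer`) are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative control-flow sketch of an iterative multimodal RAG loop
# in the spirit of MI-RAG. `mllm` and each `kb` are abstract interfaces
# supplied by the caller; all method names are hypothetical.

def mi_rag(image, question, knowledge_bases, mllm, max_iters=3):
    reasoning_record = []  # accumulated reasoning across iterations

    for _ in range(max_iters):
        # 1. Use the reasoning record so far to formulate a multi-query.
        queries = mllm.formulate_queries(image, question, reasoning_record)

        # 2. Joint search across heterogeneous knowledge bases
        #    (visually grounded and textual).
        retrieved = []
        for kb in knowledge_bases:
            retrieved.extend(kb.search(queries))

        # 3. Synthesize the new knowledge into the reasoning record.
        reasoning_record.append(
            mllm.synthesize(question, retrieved, reasoning_record)
        )

        # 4. Stop early once the model judges the evidence sufficient.
        answer = mllm.answer_or_none(image, question, reasoning_record)
        if answer is not None:
            return answer

    # Fall back to answering with whatever knowledge has been accumulated.
    return mllm.answer(image, question, reasoning_record)
```

The point of the sketch is the feedback structure: each iteration's retrieval is conditioned on reasoning produced in earlier iterations, which is what distinguishes this loop from a single-pass RAG pipeline.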
Similar Papers
mKG-RAG: Multimodal Knowledge Graph-Enhanced RAG for Visual Question Answering
CV and Pattern Recognition
Helps computers answer questions about pictures better.
MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework
Artificial Intelligence
Helps AI find better, truer answers.
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval
Information Retrieval
Helps computers answer questions about pictures.