Score: 2

ReAG: Reasoning-Augmented Generation for Knowledge-based Visual Question Answering

Published: November 27, 2025 | arXiv ID: 2511.22715v1

By: Alberto Compagnoni, Marco Morini, Sara Sarto, and more

Potential Business Impact:

Helps AI answer knowledge-intensive visual questions by retrieving external facts and filtering out irrelevant ones before answering.

Business Areas:
Augmented Reality Hardware, Software

Multimodal Large Language Models (MLLMs) have shown impressive capabilities in jointly understanding text, images, and videos, often evaluated via Visual Question Answering (VQA). However, even state-of-the-art MLLMs struggle with domain-specific or knowledge-intensive queries, where relevant information is underrepresented in pre-training data. Knowledge-based VQA (KB-VQA) addresses this by retrieving external documents to condition answer generation, but current retrieval-augmented approaches suffer from low precision, noisy passages, and limited reasoning. To address this, we propose ReAG, a novel Reasoning-Augmented Multimodal RAG approach that combines coarse- and fine-grained retrieval with a critic model that filters irrelevant passages, ensuring high-quality additional context. The model follows a multi-stage training strategy leveraging reinforcement learning to enhance reasoning over retrieved content, while supervised fine-tuning serves only as a cold start. Extensive experiments on Encyclopedic-VQA and InfoSeek demonstrate that ReAG significantly outperforms prior methods, improving answer accuracy and providing interpretable reasoning grounded in retrieved evidence. Our source code is publicly available at: https://github.com/aimagelab/ReAG.
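To make the retrieve-filter-generate pipeline described in the abstract concrete, here is a minimal, purely illustrative Python sketch of a ReAG-style flow: coarse retrieval, fine-grained reranking, critic filtering of irrelevant passages, and answer generation. All function names, the toy word-overlap scoring, and the relevance threshold are hypothetical stand-ins for the paper's multimodal retriever, critic model, and MLLM generator, and the visual question is reduced to text here for simplicity; this is not the authors' implementation (see their repository for that).

```python
# Hypothetical sketch of a ReAG-style KB-VQA pipeline, based only on the
# abstract: coarse retrieval -> fine-grained reranking -> critic filtering
# -> answer generation. All names and scores below are illustrative.

from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    coarse_score: float = 0.0  # document-level retrieval score (toy)
    fine_score: float = 0.0    # passage-level reranking score (toy)


def coarse_retrieve(query: str, corpus: list[str], k: int = 5) -> list[Passage]:
    """Toy coarse retrieval: rank documents by query-term overlap."""
    q = set(query.lower().split())
    scored = [Passage(doc, coarse_score=len(q & set(doc.lower().split())))
              for doc in corpus]
    return sorted(scored, key=lambda p: p.coarse_score, reverse=True)[:k]


def fine_rerank(query: str, passages: list[Passage]) -> list[Passage]:
    """Toy fine-grained reranking: length-normalized overlap, a stand-in
    for a learned passage reranker."""
    q = set(query.lower().split())
    for p in passages:
        toks = p.text.lower().split()
        p.fine_score = len(q & set(toks)) / max(len(toks), 1)
    return sorted(passages, key=lambda p: p.fine_score, reverse=True)


def critic_filter(passages: list[Passage], threshold: float = 0.1) -> list[Passage]:
    """Stand-in for the critic model: drop passages judged irrelevant,
    so only high-quality context conditions generation."""
    return [p for p in passages if p.fine_score >= threshold]


def generate_answer(query: str, context: list[Passage]) -> str:
    """Placeholder for the MLLM generator conditioned on filtered evidence."""
    evidence = " | ".join(p.text for p in context) or "no relevant evidence"
    return f"Answer to '{query}' grounded in: {evidence}"


if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
        "Pandas eat bamboo and live in the mountain forests of central China.",
        "The tower in the photo is the Eiffel Tower, designed by Gustave Eiffel.",
    ]
    query = "When was the tower in the photo completed?"
    candidates = fine_rerank(query, coarse_retrieve(query, corpus))
    print(generate_answer(query, critic_filter(candidates)))
```

In the paper, the reranking and filtering stages are learned models trained with supervised fine-tuning as a cold start and refined with reinforcement learning; the fixed heuristics above merely mark where those components sit in the pipeline.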

Repos / Data Links
https://github.com/aimagelab/ReAG

Page Count
18 pages

Category
Computer Science:
Computer Vision and Pattern Recognition