Enhancing Document VQA Models via Retrieval-Augmented Generation
By: Eric López, Artemis Llabrés, Ernest Valveny
Potential Business Impact:
Helps computers answer questions from long documents.
Document Visual Question Answering (Document VQA) must cope with documents that span dozens of pages, yet leading systems still concatenate every page or rely on very large vision-language models, both of which are memory-hungry. Retrieval-Augmented Generation (RAG) offers an attractive alternative: first retrieve a concise set of relevant segments, then generate answers from this selected evidence. In this paper, we systematically evaluate the impact of incorporating RAG into Document VQA through different retrieval variants (text-based retrieval using OCR tokens, and purely visual retrieval without OCR) across multiple models and benchmarks. Evaluated on the multi-page datasets MP-DocVQA, DUDE, and InfographicVQA, the text-centric variant improves the "concatenate-all-pages" baseline by up to +22.5 ANLS, while the visual variant achieves a +5.0 ANLS improvement without requiring any text extraction. An ablation confirms that the retrieval and reranking components drive most of the gain, whereas the layout-guided chunking strategy, proposed in several recent works to leverage page structure, fails to help on these datasets. Our experiments demonstrate that careful evidence selection consistently boosts accuracy across multiple model sizes and multi-page benchmarks, underscoring its practical value for real-world Document VQA.
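The retrieve-then-generate pipeline the abstract describes can be illustrated with a minimal, stdlib-only sketch. This is not the authors' implementation: the TF-IDF scorer stands in for the paper's text-based retriever over OCR tokens, the overlap-based reranker for its reranking stage, and all names are illustrative. A real system would embed page text (or page images, for the OCR-free variant) with a learned encoder and pass only the top-ranked pages to a vision-language model for answer generation.

```python
import math
from collections import Counter

def tokenize(text):
    """Illustrative tokenizer; a real pipeline would use OCR tokens per page."""
    return text.lower().split()

def tfidf_scores(query, pages):
    """Score each page against the query with a simple TF-IDF sum."""
    n = len(pages)
    page_tokens = [tokenize(p) for p in pages]
    df = Counter(t for toks in page_tokens for t in set(toks))
    q_tokens = tokenize(query)
    scores = []
    for toks in page_tokens:
        tf = Counter(toks)
        # Smoothed IDF so unseen query terms do not divide by zero.
        scores.append(sum(tf[t] * math.log((n + 1) / (df[t] + 1)) for t in q_tokens))
    return scores

def retrieve(query, pages, k=2):
    """Return the indices of the top-k pages by TF-IDF score."""
    scores = tfidf_scores(query, pages)
    return sorted(range(len(pages)), key=lambda i: scores[i], reverse=True)[:k]

def rerank(query, pages, candidate_ids):
    """Rerank retrieved candidates by query-term coverage (stand-in for a
    learned reranker, which the ablation finds contributes much of the gain)."""
    q = set(tokenize(query))
    return sorted(candidate_ids,
                  key=lambda i: len(q & set(tokenize(pages[i]))),
                  reverse=True)
```

In a full Document VQA system, the reranked pages would be concatenated into a short context and handed to the answer generator, so memory scales with the retrieved evidence rather than with document length.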
Similar Papers
VDocRAG: Retrieval-Augmented Generation over Visually-Rich Documents
Computation and Language
Helps computers understand pictures and text in documents.
Retrieval Augmented Generation and Understanding in Vision: A Survey and New Outlook
CV and Pattern Recognition
Helps computers "see" and create pictures better.