Scaling Beyond Context: A Survey of Multimodal Retrieval-Augmented Generation for Document Understanding

Published: October 17, 2025 | arXiv ID: 2510.15253v1

By: Sensen Gao, Shanshan Zhao, Xu Jiang, and others

Potential Business Impact:

Enables systems to retrieve and reason over every part of a document, including text, tables, charts, and layout.

Business Areas:
Software

Document understanding is critical for applications from financial analysis to scientific discovery. Current approaches, whether OCR-based pipelines feeding Large Language Models (LLMs) or native Multimodal LLMs (MLLMs), face key limitations: the former loses structural detail, while the latter struggles with context modeling. Retrieval-Augmented Generation (RAG) helps ground models in external data, but documents' multimodal nature, i.e., combining text, tables, charts, and layout, demands a more advanced paradigm: Multimodal RAG. This approach enables holistic retrieval and reasoning across all modalities, unlocking comprehensive document intelligence. Recognizing its importance, this paper presents a systematic survey of Multimodal RAG for document understanding. We propose a taxonomy based on domain, retrieval modality, and granularity, and review advances involving graph structures and agentic frameworks. We also summarize key datasets, benchmarks, and applications, and highlight open challenges in efficiency, fine-grained representation, and robustness, providing a roadmap for future progress in document AI.
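To make the Multimodal RAG idea concrete, here is a minimal toy sketch of retrieval across modalities. All chunk IDs, embeddings, and the per-modality weighting scheme are illustrative assumptions, not taken from the survey; a real system would use learned multimodal encoders and a vector index rather than hand-written vectors.

```python
import math

# Hypothetical toy corpus: each chunk carries an embedding plus a
# modality tag (text, table, chart). Vectors and contents are made up
# purely for illustration.
CHUNKS = [
    {"id": "p1-text",  "modality": "text",  "vec": [0.9, 0.1, 0.0],
     "content": "Q3 revenue grew 12% year over year."},
    {"id": "p1-table", "modality": "table", "vec": [0.2, 0.8, 0.1],
     "content": "Revenue table: Q1 10M, Q2 11M, Q3 12.3M"},
    {"id": "p2-chart", "modality": "chart", "vec": [0.1, 0.3, 0.9],
     "content": "Chart: revenue trend, upward slope"},
]

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, k=2, modality_weights=None):
    """Rank chunks of all modalities in one shared space; an optional
    per-modality weight lets the caller bias toward, say, tables."""
    weights = modality_weights or {}
    scored = [
        (cosine(query_vec, c["vec"]) * weights.get(c["modality"], 1.0), c)
        for c in CHUNKS
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]

# A query embedding leaning toward tabular evidence (illustrative).
hits = retrieve([0.3, 0.7, 0.2], k=2, modality_weights={"table": 1.2})
print([h["id"] for h in hits])  # → ['p1-table', 'p2-chart']
```

The retrieved chunks, regardless of modality, would then be passed to an MLLM as grounding context; this joint ranking over text, table, and chart chunks is the "holistic retrieval" the abstract refers to.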

Country of Origin
🇦🇺 Australia

Page Count
21 pages

Category
Computer Science:
Computation and Language