Roles of MLLMs in Visually Rich Document Retrieval for RAG: A Survey

Published: December 16, 2025 | arXiv ID: 2601.03262v1

By: Xiantao Zhang

Potential Business Impact:

Helps computers search documents by understanding pictures, layout, and text together.

Business Areas:
Visual Search, Internet Services

Visually rich documents (VRDs) challenge retrieval-augmented generation (RAG) with layout-dependent semantics, brittle OCR, and evidence spread across complex figures and structured tables. This survey examines how Multimodal Large Language Models (MLLMs) are being used to make VRD retrieval practical for RAG. We organize the literature into three roles: Modality-Unifying Captioners, Multimodal Embedders, and End-to-End Representers. We compare these roles along retrieval granularity, information fidelity, latency and index size, and compatibility with reranking and grounding. We also outline key trade-offs and offer practical guidance on when to favor each role. Finally, we identify promising directions for future research, including adaptive retrieval units, model size reduction, and improved evaluation methodology.
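To make the Multimodal Embedder role concrete, here is a minimal sketch, assuming a hypothetical MLLM-based encoder: each page image is embedded into a dense vector offline, a text query is embedded into the same space at retrieval time, and retrieval is nearest-neighbor search. The functions embed_page and embed_query are illustrative stubs, not an API from the surveyed paper.

import numpy as np

DIM = 512  # assumed embedding dimension for this sketch

def _stub_embedding(seed: int) -> np.ndarray:
    # Deterministic random vector standing in for a real MLLM encoder.
    vec = np.random.default_rng(seed).normal(size=DIM)
    return vec / np.linalg.norm(vec)

def embed_page(page_image: bytes) -> np.ndarray:
    # Hypothetical: map one page image to a unit-norm dense vector.
    return _stub_embedding(sum(page_image) % 2**32)

def embed_query(text: str) -> np.ndarray:
    # Hypothetical: map a text query into the same embedding space.
    return _stub_embedding(sum(text.encode()) % 2**32)

def retrieve(query: str, index: np.ndarray, k: int = 3) -> list[int]:
    # Cosine similarity reduces to a dot product for unit-norm vectors.
    scores = index @ embed_query(query)
    return list(np.argsort(-scores)[:k])

# Offline: embed each page once; the index grows with page count.
pages = [b"page-0", b"page-1", b"page-2", b"page-3"]
index = np.stack([embed_page(p) for p in pages])

# Online: a single dense lookup feeds the top-k pages to the generator.
print(retrieve("quarterly revenue table", index))

This sketch also makes the survey's comparison axes tangible: index size scales with the number of embedded pages, and query latency is one encoder pass plus a dot-product search, in contrast to captioner pipelines that index generated text instead.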

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science: Information Retrieval