A Survey on MLLM-based Visually Rich Document Understanding: Methods, Challenges, and Emerging Trends
By: Yihao Ding, Siwen Luo, Yue Dai, and more
Potential Business Impact:
Enables automated processing of documents that mix text, images, and layout, such as forms, invoices, and reports, reducing the need for manual data extraction.
Visually Rich Document Understanding (VRDU) has emerged as a critical field, driven by the need to automatically process documents containing complex visual, textual, and layout information. Recently, Multimodal Large Language Models (MLLMs) have shown remarkable potential in this domain, leveraging both Optical Character Recognition (OCR)-dependent and OCR-free frameworks to extract and interpret information from document images. This survey reviews recent advancements in MLLM-based VRDU, highlighting three core components: (1) methods for encoding and fusing textual, visual, and layout features; (2) training paradigms, including pretraining strategies, instruction-response tuning, and the trainability of different model modules; and (3) datasets utilized for pretraining, instruction-tuning, and supervised fine-tuning. Finally, we discuss the challenges and opportunities in this evolving field and propose future directions to advance the efficiency, generalizability, and robustness of VRDU systems.
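To make the first component more concrete, the sketch below illustrates one common OCR-dependent fusion scheme in this line of work: OCR token embeddings are summed with 2D layout (bounding-box) embeddings and concatenated with projected visual patch features before a joint transformer encoder. This is a minimal illustration under assumed settings, not an architecture taken from the survey; the module name `LayoutAwareFusion` and all dimensions (e.g., the 768-dimensional patch features) are hypothetical.

```python
# Minimal sketch (illustrative assumption, not the survey's method):
# fuse OCR text tokens, 2D layout boxes, and visual patch features.
import torch
import torch.nn as nn

class LayoutAwareFusion(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # OCR text tokens
        self.bbox_emb = nn.Linear(4, d_model)                 # normalized (x0, y0, x1, y1) boxes
        self.visual_proj = nn.Linear(768, d_model)            # patch features from a vision encoder
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, bboxes, patch_feats):
        # token_ids: (B, T)  bboxes: (B, T, 4) in [0, 1]  patch_feats: (B, P, 768)
        text = self.token_emb(token_ids) + self.bbox_emb(bboxes)  # fuse text + layout
        visual = self.visual_proj(patch_feats)                    # project visual patches
        fused = torch.cat([visual, text], dim=1)                  # joint multimodal sequence
        return self.encoder(fused)                                # contextualized features

# Example: 12 OCR tokens and 16 visual patches from one document image.
model = LayoutAwareFusion()
out = model(torch.randint(0, 30522, (1, 12)), torch.rand(1, 12, 4), torch.randn(1, 16, 768))
print(out.shape)  # torch.Size([1, 28, 256])
```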