Score: 2

CMRAG: Co-modality-based document retrieval and visual question answering

Published: September 2, 2025 | arXiv ID: 2509.02123v1

By: Wang Chen, Guanqiang Qi, Weikang Li, and more

BigTech Affiliations: Baidu

Potential Business Impact:

Helps computers understand pictures and words together.

Business Areas:
Augmented Reality Hardware, Software

Retrieval-Augmented Generation (RAG) has become a core paradigm in document question answering tasks. However, existing methods have limitations when dealing with multimodal documents: one category relies on layout analysis and text extraction, so it can use only explicit textual information and struggles to capture images or unstructured content; the other treats document segments as visual input and passes them directly to vision-language models (VLMs) for processing, but it ignores the semantic advantages of text, leading to suboptimal generation results. This paper proposes co-modality-based RAG (CMRAG), which can simultaneously leverage text and images for efficient retrieval and generation. Specifically, we first perform structured parsing on documents to obtain co-modality representations of text segments and image regions. Subsequently, for a user query, we retrieve candidate evidence from the text and image channels separately and aggregate the results at the cross-modal retrieval level. Finally, we prompt the VLM to generate the final response based on the co-modality retrieval results. Experiments demonstrate that our method significantly outperforms pure-vision-based RAG in visual document question answering tasks. The findings show that integrating co-modality information into the RAG framework in a unified manner is an effective way to improve complex document visual question answering (VQA) systems.
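
As a rough illustration of the retrieval step described in the abstract, the sketch below retrieves candidates from a text channel and an image channel and fuses their scores before they would be handed to a VLM prompt. The embedder stubs, the fusion weight `alpha`, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of co-modality retrieval, under assumed embedders and score fusion.
import numpy as np

def embed_text(texts):
    # Placeholder text embedder; a real system would use a text embedding model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 128))

def embed_image(image_paths):
    # Placeholder image embedder; a real system would use a vision encoder
    # that shares an embedding space with the query (e.g., a CLIP-like model).
    rng = np.random.default_rng(1)
    return rng.normal(size=(len(image_paths), 128))

def cosine_sim(query_vec, matrix):
    # Cosine similarity between one query vector and each row of a matrix.
    query_vec = query_vec / np.linalg.norm(query_vec)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query_vec

def co_modality_retrieve(query, text_segments, image_regions, k=3, alpha=0.5):
    """Retrieve evidence from text and image channels, then fuse at the retrieval level."""
    q = embed_text([query])[0]
    text_scores = cosine_sim(q, embed_text(text_segments))
    image_scores = cosine_sim(q, embed_image(image_regions))

    # Aggregate across channels: a simple weighted fusion of per-channel scores.
    candidates = (
        [("text", seg, alpha * s) for seg, s in zip(text_segments, text_scores)]
        + [("image", img, (1 - alpha) * s) for img, s in zip(image_regions, image_scores)]
    )
    candidates.sort(key=lambda c: c[2], reverse=True)
    return candidates[:k]

if __name__ == "__main__":
    evidence = co_modality_retrieve(
        "What was Q3 revenue?",
        text_segments=["Revenue rose 12% in Q3.", "The board met in May."],
        image_regions=["page3_chart.png", "page5_table.png"],
    )
    # The top-k co-modality evidence would then be inserted into the VLM prompt.
    for modality, item, score in evidence:
        print(modality, item, round(float(score), 3))
```

In a real pipeline the fused evidence would be serialized into the VLM prompt (text snippets inline, image regions as visual inputs), which is the generation step the abstract describes.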

Country of Origin
🇨🇳 🇭🇰 China, Hong Kong

Page Count
14 pages

Category
Computer Science:
Computation and Language