MMGraphRAG: Bridging Vision and Language with Interpretable Multimodal Knowledge Graphs
By: Xueyao Wan, Hang Yu
Potential Business Impact:
Helps computers understand pictures and words together better.
Retrieval-Augmented Generation (RAG) enhances language model generation by retrieving relevant information from external knowledge bases. However, conventional RAG methods suffer from missing multimodal information. Multimodal RAG methods address this by mapping images and text into a shared embedding space, but they fail to capture the structure of knowledge and the logical chains between modalities. Moreover, they require large-scale training for specific tasks, resulting in limited generalization ability. To address these limitations, we propose MMGraphRAG, which refines visual content through scene graphs and constructs a multimodal knowledge graph (MMKG) in conjunction with a text-based KG. It employs spectral clustering to achieve cross-modal entity linking and retrieves context along reasoning paths to guide the generative process. Experimental results show that MMGraphRAG achieves state-of-the-art performance on the DocBench and MMLongBench datasets, demonstrating strong domain adaptability and clear reasoning paths.
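The abstract mentions spectral clustering for cross-modal entity linking. Below is a minimal, hypothetical sketch of how such linking could work, assuming entity embeddings from a shared encoder and using scikit-learn's SpectralClustering; the entity names, embedding dimension, and cluster count are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: cross-modal entity linking via spectral clustering.
# Entities from the text KG and the image scene graph are clustered jointly;
# same-cluster pairs across modalities become candidate links in the MMKG.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

# Toy embeddings (in practice these would come from a shared encoder).
text_entities = ["Eiffel Tower", "Paris", "Seine River"]        # from the text KG
image_entities = ["tower_region_1", "river_region_2"]            # from the scene graph
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(text_entities) + len(image_entities), 64))

# Nonnegative affinity matrix from cosine similarity.
affinity = (cosine_similarity(embeddings) + 1.0) / 2.0

# Spectral clustering over the joint entity set; cluster count is an assumption.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)

# Propose cross-modal links between text and image entities sharing a cluster.
links = [
    (t, v)
    for i, t in enumerate(text_entities)
    for j, v in enumerate(image_entities, start=len(text_entities))
    if labels[i] == labels[j]
]
print("candidate cross-modal links:", links)
```

With meaningful embeddings, entities such as "Eiffel Tower" and "tower_region_1" would land in the same cluster and be linked as one node across modalities, which is the role the abstract assigns to cross-modal entity linking.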
Similar Papers
M$^3$KG-RAG: Multi-hop Multimodal Knowledge Graph-enhanced Retrieval-Augmented Generation
Computation and Language
Helps AI understand videos and sounds better.
mKG-RAG: Multimodal Knowledge Graph-Enhanced RAG for Visual Question Answering
CV and Pattern Recognition
Helps computers answer questions about pictures better.