M4-RAG: A Massive-Scale Multilingual Multi-Cultural Multimodal RAG
By: David Anugraha, Patrick Amadeus Irawan, Anshul Singh, and more
Potential Business Impact:
Helps computers answer questions about pictures in many languages.
Vision-language models (VLMs) have achieved strong performance in visual question answering (VQA), yet they remain constrained by static training data. Retrieval-Augmented Generation (RAG) mitigates this limitation by enabling access to up-to-date, culturally grounded, and multilingual information; however, multilingual multimodal RAG remains largely underexplored. We introduce M4-RAG, a massive-scale benchmark covering 42 languages and 56 regional dialects and registers, comprising over 80,000 culturally diverse image-question pairs for evaluating retrieval-augmented VQA across languages and modalities. To balance realism with reproducibility, we build a controlled retrieval environment containing millions of carefully curated multilingual documents relevant to the query domains, approximating real-world retrieval conditions while ensuring consistent experimentation. Our systematic evaluation reveals that although RAG consistently benefits smaller VLMs, it fails to scale to larger models and often even degrades their performance, exposing a critical mismatch between model size and current retrieval effectiveness. M4-RAG provides a foundation for advancing next-generation RAG systems capable of reasoning seamlessly across languages, modalities, and cultural contexts.
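The abstract describes the standard retrieval-augmented VQA loop: retrieve multilingual documents relevant to an image-question pair, then condition the VLM's answer on them. The toy sketch below illustrates that flow only; it is not the paper's pipeline. The word-overlap retriever, the `Doc` class, `retrieve`, `answer_vqa`, and the three-document corpus are all illustrative stand-ins (a real system would use a multilingual dense retriever over the millions of curated documents and an actual VLM call).

```python
from dataclasses import dataclass

@dataclass
class Doc:
    lang: str   # language code of the document
    text: str   # document content

# Toy multilingual pool standing in for the curated retrieval environment.
CORPUS = [
    Doc("en", "The Borobudur temple in Central Java is a 9th-century Buddhist monument."),
    Doc("id", "Candi Borobudur adalah candi Buddha abad ke-9 di Jawa Tengah."),
    Doc("en", "Mount Fuji is the highest mountain in Japan."),
]

def retrieve(question: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    """Rank documents by naive word overlap with the question
    (a placeholder for a real multilingual dense retriever)."""
    q_tokens = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: -len(q_tokens & set(d.text.lower().split())),
    )
    return scored[:k]

def answer_vqa(question: str, image_caption: str, context: list[Doc]) -> str:
    """Stub for the VLM step: assembles the retrieval-augmented prompt
    that a real model would consume together with the image."""
    ctx = "\n".join(f"[{d.lang}] {d.text}" for d in context)
    return (
        f"Context:\n{ctx}\n\n"
        f"Image: {image_caption}\n"
        f"Question: {question}\nAnswer:"
    )  # a real system would send this prompt plus the image to a VLM

question = "What century is the Borobudur temple from?"
docs = retrieve(question, CORPUS)
print(answer_vqa(question, "a stone temple with stupas", docs))
```

The benchmark's finding that RAG helps smaller VLMs but can hurt larger ones plays out at the `answer_vqa` step of a loop like this: if retrieval surfaces noisy or off-topic context, a large model that already knows the answer can be led astray by it.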
Similar Papers
Multimodal Iterative RAG for Knowledge Visual Question Answering
CV and Pattern Recognition
Helps computers answer harder questions using more information.
Multilingual Retrieval-Augmented Generation for Knowledge-Intensive Task
Computation and Language
Helps computers answer questions in any language.
A Survey of Multimodal Retrieval-Augmented Generation
Information Retrieval
Lets computers understand pictures and words together.