Score: 3

M4-RAG: A Massive-Scale Multilingual Multi-Cultural Multimodal RAG

Published: December 5, 2025 | arXiv ID: 2512.05959v1

By: David Anugraha, Patrick Amadeus Irawan, Anshul Singh, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Helps AI systems answer questions about images in many languages by retrieving up-to-date, culturally relevant information instead of relying only on static training data.

Business Areas:
Augmented Reality Hardware, Software

Vision-language models (VLMs) have achieved strong performance in visual question answering (VQA), yet they remain constrained by static training data. Retrieval-Augmented Generation (RAG) mitigates this limitation by enabling access to up-to-date, culturally grounded, and multilingual information; however, multilingual multimodal RAG remains largely underexplored. We introduce M4-RAG, a massive-scale benchmark covering 42 languages and 56 regional dialects and registers, comprising over 80,000 culturally diverse image-question pairs for evaluating retrieval-augmented VQA across languages and modalities. To balance realism with reproducibility, we build a controlled retrieval environment containing millions of carefully curated multilingual documents relevant to the query domains, approximating real-world retrieval conditions while ensuring consistent experimentation. Our systematic evaluation reveals that although RAG consistently benefits smaller VLMs, it fails to scale to larger models and often even degrades their performance, exposing a critical mismatch between model size and current retrieval effectiveness. M4-RAG provides a foundation for advancing next-generation RAG systems capable of reasoning seamlessly across languages, modalities, and cultural contexts.
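The retrieval-augmented VQA setup the abstract describes (retrieve relevant multilingual documents for an image-question pair, then condition a VLM on them) can be sketched in a few lines of Python. This is a minimal illustration, assuming a toy lexical retriever and a stubbed VLM call; the names Document, retrieve, and answer_with_vlm are hypothetical and not from the M4-RAG codebase.

```python
# Minimal sketch of a retrieval-augmented VQA loop over a fixed corpus.
# All names below are illustrative placeholders, not the M4-RAG code.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    language: str
    text: str


def score(query: str, doc: Document) -> float:
    """Toy lexical-overlap score standing in for a multilingual retriever."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.text.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Return the top-k documents from the controlled retrieval corpus."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]


def answer_with_vlm(image_path: str, question: str, context: list[Document]) -> str:
    """Placeholder for a VLM call; a real system would pass the image and
    the retrieved passages to a vision-language model."""
    ctx = " | ".join(d.text for d in context)
    return f"[VLM answer to '{question}' about {image_path} given: {ctx}]"


corpus = [
    Document("d1", "en", "The temple in the photo hosts an annual lantern festival."),
    Document("d2", "id", "Candi ini terkenal dengan festival lampion tahunannya."),
    Document("d3", "en", "Unrelated article about semiconductor supply chains."),
]

question = "Which festival is held at this temple?"
retrieved = retrieve(question, corpus, k=2)
print(answer_with_vlm("temple.jpg", question, retrieved))
```

In the paper's controlled setting, the corpus is a fixed pool of curated multilingual documents rather than the open web, which keeps retrieval results reproducible across experiments while approximating real-world retrieval conditions.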

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
18 pages

Category
Computer Science: Computation and Language