Grounding Multilingual Multimodal LLMs With Cultural Knowledge

Published: August 10, 2025 | arXiv ID: 2508.07414v1

By: Jean de Dieu Nyandwi, Yueqi Song, Simran Khanuja, and more

Potential Business Impact:

Helps AI systems better understand culturally specific images and questions across many languages.

Multimodal Large Language Models (MLLMs) excel in high-resource settings but often misinterpret long-tail cultural entities and underperform in low-resource languages. To address this gap, we propose a data-centric approach that directly grounds MLLMs in cultural knowledge. Leveraging a large-scale knowledge graph from Wikidata, we collect images that represent culturally significant entities and generate synthetic multilingual visual question answering (VQA) data. The resulting dataset, CulturalGround, comprises 22 million high-quality, culturally rich VQA pairs spanning 42 countries and 39 languages. We train an open-source MLLM, CulturalPangea, on CulturalGround, interleaving standard multilingual instruction-tuning data to preserve general abilities. CulturalPangea achieves state-of-the-art performance among open models on various culture-focused multilingual multimodal benchmarks, outperforming prior models by an average of 5.0 points without degrading results on mainstream vision-language tasks. Our findings show that our targeted, culturally grounded approach could substantially narrow the cultural gap in MLLMs and offer a practical path towards globally inclusive multimodal systems.
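
The data pipeline the abstract describes, mining culturally significant entities from Wikidata, pairing them with their images, and generating VQA data, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's released code: the SPARQL filter (dishes from Rwanda), the English-only question template, and the helper names `fetch_entities` and `to_vqa_pair` are all illustrative assumptions, whereas the real CulturalGround pipeline spans 42 countries, 39 languages, and LLM-generated rather than templated questions.

```python
# Hypothetical sketch of a CulturalGround-style data pipeline:
# query Wikidata's public SPARQL endpoint for culturally significant
# entities that have images, then template each into a VQA pair.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

# Assumed example filter: dishes (Q746549) originating in Rwanda (Q1037)
# that have an image (P18). The paper's entity selection is far broader.
QUERY = """
SELECT ?entity ?entityLabel ?image WHERE {
  ?entity wdt:P31/wdt:P279* wd:Q746549 ;   # instance/subclass of: dish
          wdt:P495 wd:Q1037 ;              # country of origin: Rwanda
          wdt:P18 ?image .                 # has an image
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

def fetch_entities():
    """Run the SPARQL query and return the raw result bindings."""
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "cultural-vqa-sketch/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

def to_vqa_pair(binding):
    """Turn one entity binding into a single templated VQA example.

    The actual dataset generates richer, multilingual questions; this
    fixed English template is only a placeholder.
    """
    return {
        "image_url": binding["image"]["value"],
        "question": "What dish is shown in this image?",
        "answer": binding["entityLabel"]["value"],
    }

if __name__ == "__main__":
    for binding in fetch_entities():
        print(to_vqa_pair(binding))
```

Scaling this sketch toward the dataset described above would mean sweeping many entity types and countries, generating questions in each target language, and filtering for quality before the resulting pairs are interleaved with standard instruction-tuning data during training.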

Country of Origin
🇺🇸 United States

Page Count
45 pages

Category
Computer Science: Computation and Language