Grounding Multilingual Multimodal LLMs With Cultural Knowledge
By: Jean de Dieu Nyandwi, Yueqi Song, Simran Khanuja, and more
Potential Business Impact:
Helps computers understand different cultures worldwide.
Multimodal Large Language Models (MLLMs) excel in high-resource settings, but they often misinterpret long-tail cultural entities and underperform in low-resource languages. To address this gap, we propose a data-centric approach that directly grounds MLLMs in cultural knowledge. Leveraging a large-scale knowledge graph from Wikidata, we collect images that represent culturally significant entities and generate synthetic multilingual visual question answering data. The resulting dataset, CulturalGround, comprises 22 million high-quality, culturally rich VQA pairs spanning 42 countries and 39 languages. We train an open-source MLLM, CulturalPangea, on CulturalGround, interleaving standard multilingual instruction-tuning data to preserve general abilities. CulturalPangea achieves state-of-the-art performance among open models on various culture-focused multilingual multimodal benchmarks, outperforming prior models by an average of 5.0 points without degrading results on mainstream vision-language tasks. Our findings show that a targeted, culturally grounded approach could substantially narrow the cultural gap in MLLMs and offer a practical path towards globally inclusive multimodal systems.
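To make the data-centric recipe concrete, the sketch below illustrates the general idea of mining culturally significant entities from Wikidata and turning their images into VQA records. It is a minimal illustration, not the paper's actual pipeline: the SPARQL query shape, the use of P17 (country) as a stand-in for cultural relevance, and the fixed question template in make_vqa_pair are all assumptions for demonstration only; CulturalGround's entity selection and multilingual question generation are far richer.

```python
# Minimal sketch (assumptions noted above): query Wikidata's public SPARQL endpoint
# for entities in a target country that have an image, then wrap each image in a
# simple synthetic VQA record. A real pipeline would filter for cultural
# significance and generate diverse multilingual questions with an LLM.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def fetch_entities_with_images(country_qid: str, limit: int = 50) -> list[dict]:
    """Return Wikidata entities located in `country_qid` that have an image (P18)."""
    query = f"""
    SELECT ?entity ?entityLabel ?image WHERE {{
      ?entity wdt:P17 wd:{country_qid} ;   # country = target country (assumed proxy)
              wdt:P18 ?image .             # entity has an image
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "cultural-vqa-sketch/0.1"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

def make_vqa_pair(entity_label: str, image_url: str, language: str = "en") -> dict:
    """Build one (image, question, answer) record from a fixed template (illustrative)."""
    return {
        "image_url": image_url,
        "language": language,
        "question": "What culturally significant entity is shown in this image?",
        "answer": entity_label,
    }

if __name__ == "__main__":
    # Example: Q1037 is Rwanda; a full pipeline would iterate over many countries
    # and languages to build a multilingual dataset.
    for row in fetch_entities_with_images("Q1037", limit=5):
        pair = make_vqa_pair(row["entityLabel"]["value"], row["image"]["value"])
        print(pair)
```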
Similar Papers
MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs
CV and Pattern Recognition
Helps computers understand languages and cultures better.
Towards Geo-Culturally Grounded LLM Generations
Computation and Language
Helps computers understand different cultures better.