Do You Know About My Nation? Investigating Multilingual Language Models' Cultural Literacy Through Factual Knowledge
By: Eshaan Tanwar, Anwoy Chatterjee, Michael Saxon, and more
Potential Business Impact:
Helps computers understand facts from many countries.
Most multilingual question-answering benchmarks, while covering a diverse pool of languages, do not account for regional diversity in the information they capture and tend to be Western-centric. This creates a significant gap in fairly evaluating multilingual models' comprehension of factual information from diverse geographical locations. To address this, we introduce XNationQA for investigating the cultural literacy of multilingual LLMs. XNationQA comprises 49,280 questions on the geography, culture, and history of nine countries, presented in seven languages. We benchmark eight standard multilingual LLMs on XNationQA and evaluate them using two novel transference metrics. Our analyses uncover a considerable discrepancy in the models' access to culturally specific facts across languages. Notably, we often find that a model demonstrates greater knowledge of cultural information in English than in the dominant language of the respective culture. The models perform better in Western languages, although, counterintuitively, this does not necessarily translate into greater literacy about Western countries. Furthermore, we observe that models have very limited ability to transfer knowledge across languages, a limitation that is particularly evident in open-source models.
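The abstract does not define the paper's two transference metrics, but the general idea of measuring knowledge transfer across languages can be illustrated with a simple stand-in: of the questions a model answers correctly in a source language, what fraction does it also answer correctly in a target language? The Python sketch below computes per-language accuracy and this hypothetical conditional-accuracy score over aligned question sets; it is an illustration of the concept, not the authors' method, and all function names and data here are invented for the example.

```python
# A minimal sketch of one plausible cross-lingual transference score for a
# benchmark like XNationQA. This conditional accuracy (of the questions
# answered correctly in a source language, the fraction also answered
# correctly in a target language) is a hypothetical stand-in, NOT the
# paper's actual metrics, which the abstract does not define.

def per_language_accuracy(results):
    """results: dict mapping language code -> list of booleans, where index i
    marks whether the same underlying question i was answered correctly."""
    return {lang: sum(flags) / len(flags) for lang, flags in results.items()}

def transference(results, source, target):
    """Fraction of questions answered correctly in `source` that are also
    answered correctly in `target`."""
    src, tgt = results[source], results[target]
    correct_in_source = [i for i, ok in enumerate(src) if ok]
    if not correct_in_source:
        return 0.0
    return sum(tgt[i] for i in correct_in_source) / len(correct_in_source)

if __name__ == "__main__":
    # Toy results for six aligned questions in two languages (invented data).
    results = {
        "en": [True, True, False, True, True, False],
        "hi": [True, False, False, True, False, False],
    }
    print(per_language_accuracy(results))     # {'en': 0.667, 'hi': 0.333}
    print(transference(results, "en", "hi"))  # 0.5: half of en-correct carry over
```

A low score on such a measure would match the abstract's finding that knowledge available in one language often fails to carry over to others.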
Similar Papers
XLQA: A Benchmark for Locale-Aware Multilingual Open-Domain Question Answering
Computation and Language
Tests AI on questions about different cultures.
NativQA Framework: Enabling LLMs with Native, Local, and Everyday Knowledge
Computation and Language
Helps computers answer everyday questions in any language.
From Facts to Folklore: Evaluating Large Language Models on Bengali Cultural Knowledge
Computation and Language
Helps computers understand Bengali culture better.