Language over Content: Tracing Cultural Understanding in Multilingual Large Language Models
By: Seungho Cho, Changgeon Ko, Eui Jun Hwang, and more
Potential Business Impact:
Shows whether computers truly understand cultures or just the languages questions are asked in.
Large language models (LLMs) are increasingly used across diverse cultural contexts, making accurate cultural understanding essential. Prior evaluations have mostly focused on output-level performance, obscuring the factors that drive differences in responses, while studies using circuit analysis have covered few languages and rarely focused on culture. In this work, we trace the internal mechanisms behind LLMs' cultural understanding by measuring activation-path overlap when a model answers semantically equivalent questions under two conditions: varying the target country while fixing the question language, and varying the question language while fixing the country. We also use same-language country pairs to disentangle linguistic from cultural factors. Results show that internal paths overlap more for same-language, cross-country questions than for cross-language, same-country questions, indicating strong language-specific patterns. Notably, the South Korea-North Korea pair exhibits low overlap and high variability, showing that linguistic similarity does not guarantee aligned internal representations.
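The abstract does not specify how activation-path overlap is computed, but a minimal sketch of the comparison might look like the following, assuming a path is represented as the set of model components (e.g., layer/head pairs) whose attribution score passes a threshold, and that two paths are compared with Jaccard overlap. All component names, scores, and the threshold below are illustrative, not taken from the paper.

```python
# Minimal sketch of activation-path overlap, under assumed details:
# a "path" is the set of components with attribution >= threshold,
# and overlap is measured as Jaccard similarity between two such sets.

from itertools import combinations

def activation_path(attributions, threshold=0.1):
    """Keep the components whose attribution score passes the threshold."""
    return {comp for comp, score in attributions.items() if score >= threshold}

def path_overlap(path_a, path_b):
    """Jaccard overlap between two activation paths (1.0 = identical)."""
    if not path_a and not path_b:
        return 1.0
    return len(path_a & path_b) / len(path_a | path_b)

# Toy attribution scores for semantically equivalent questions:
# same language, different countries vs. different languages, same country.
runs = {
    ("ko", "South Korea"): {"L3.H5": 0.9, "L7.H2": 0.6, "L10.H1": 0.3},
    ("ko", "North Korea"): {"L3.H5": 0.8, "L7.H2": 0.5, "L12.H4": 0.4},
    ("en", "South Korea"): {"L2.H7": 0.7, "L8.H3": 0.6, "L10.H1": 0.2},
}

paths = {key: activation_path(scores) for key, scores in runs.items()}
for a, b in combinations(paths, 2):
    print(a, "vs", b, "->", round(path_overlap(paths[a], paths[b]), 2))
```

On these toy inputs, the same-language, cross-country pair shares more components than the cross-language, same-country pair, which is the qualitative pattern the paper reports: overlap tracks question language more than target country.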
Similar Papers
Do Large Language Models Truly Understand Cross-cultural Differences?
Computation and Language
Tests if computers understand different cultures.
Localized Cultural Knowledge is Conserved and Controllable in Large Language Models
Computation and Language
Makes computers speak other languages like locals.
Grounding Multilingual Multimodal LLMs With Cultural Knowledge
Computation and Language
Helps computers understand different cultures worldwide.