Language over Content: Tracing Cultural Understanding in Multilingual Large Language Models

Published: October 18, 2025 | arXiv ID: 2510.16565v1

By: Seungho Cho, Changgeon Ko, Eui Jun Hwang, and more

Potential Business Impact:

Shows that multilingual LLMs process questions mainly along language-specific internal pathways rather than by cultural content, which matters when deploying them across cultural contexts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are increasingly used across diverse cultural contexts, making accurate cultural understanding essential. Prior evaluations have mostly focused on output-level performance, obscuring the factors that drive differences in responses, while studies using circuit analysis have covered few languages and rarely focused on culture. In this work, we trace LLMs' internal cultural understanding mechanisms by measuring activation path overlaps when answering semantically equivalent questions under two conditions: varying the target country while fixing the question language, and varying the question language while fixing the country. We also use same-language country pairs to disentangle language from cultural aspects. Results show that internal paths overlap more for same-language, cross-country questions than for cross-language, same-country questions, indicating strong language-specific patterns. Notably, the South Korea-North Korea pair exhibits low overlap and high variability, showing that linguistic similarity does not guarantee aligned internal representation.
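To make the comparison concrete, the sketch below illustrates one simple way an "activation path overlap" could be computed: take the top-k most active internal components (e.g., attention heads or MLP edges) for each prompt condition and compare them with a Jaccard index. This is a minimal illustration, not the authors' method; the component names, the top-k selection, and the Jaccard metric are all assumptions for exposition.

```python
# Minimal sketch (not the paper's code): compare "activation path" overlap
# between two prompt conditions via a Jaccard index over the top-k most
# active components. Component granularity and thresholding are assumptions.

from typing import Dict, Set

def top_k_components(activations: Dict[str, float], k: int) -> Set[str]:
    """Keep the k components with the largest absolute activation scores."""
    ranked = sorted(activations, key=lambda name: abs(activations[name]), reverse=True)
    return set(ranked[:k])

def path_overlap(acts_a: Dict[str, float], acts_b: Dict[str, float], k: int = 5) -> float:
    """Jaccard overlap between the top-k active components of two runs."""
    a, b = top_k_components(acts_a, k), top_k_components(acts_b, k)
    return len(a & b) / len(a | b) if (a | b) else 0.0

if __name__ == "__main__":
    # Hypothetical per-component scores for two semantically equivalent questions
    # asked in the same language about different target countries
    # (the "same-language, cross-country" condition described in the abstract).
    q_country_a = {"L3.H2": 0.9, "L3.H7": 0.8, "L5.MLP": 0.6, "L7.H1": 0.4, "L9.H4": 0.2}
    q_country_b = {"L3.H2": 0.8, "L3.H7": 0.7, "L5.MLP": 0.5, "L8.H6": 0.3, "L9.H4": 0.1}
    print(f"overlap: {path_overlap(q_country_a, q_country_b, k=5):.2f}")
```

Under the paper's finding, this kind of overlap score would tend to be higher for same-language, cross-country question pairs than for cross-language, same-country pairs.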

Country of Origin
🇰🇷 Korea, Republic of

Page Count
6 pages

Category
Computer Science:
Computation and Language