
Do Large Language Models Understand Morality Across Cultures?

Published: July 28, 2025 | arXiv ID: 2507.21319v1

By: Hadi Mohammadi, Yasmeen F. S. S. Meijer, Efthymia Papadopoulou, and more

Potential Business Impact:

Helps assess and improve how well AI systems reflect moral norms across different cultures.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advancements in large language models (LLMs) have established them as powerful tools across numerous domains. However, persistent concerns about embedded biases, such as gender, racial, and cultural biases arising from their training data, raise significant questions about the ethical use and societal consequences of these technologies. This study investigates the extent to which LLMs capture cross-cultural differences and similarities in moral perspectives. Specifically, we examine whether LLM outputs align with patterns observed in international survey data on moral attitudes. To this end, we employ three complementary methods: (1) comparing variances in moral scores produced by models versus those reported in surveys, (2) conducting cluster alignment analyses to assess correspondence between country groupings derived from LLM outputs and survey data, and (3) directly probing models with comparative prompts using systematically chosen token pairs. Our results reveal that current LLMs often fail to reproduce the full spectrum of cross-cultural moral variation, tending to compress differences and exhibit low alignment with empirical survey patterns. These findings highlight a pressing need for more robust approaches to mitigate biases and improve cultural representativeness in LLMs. We conclude by discussing the implications for the responsible development and global deployment of LLMs, emphasizing fairness and ethical alignment.
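The first two analyses described in the abstract, variance comparison and cluster alignment, are straightforward to prototype. Below is a minimal sketch assuming per-country moral scores on a common scale; the example data and any column or helper names are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch (hypothetical data): compare the spread of country-level
# moral scores produced by an LLM against survey scores, and measure
# agreement between country clusterings with the adjusted Rand index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Hypothetical inputs: one moral-attitude score per country, for the
# survey and for the model, on a common scale (e.g., 1-10).
survey_scores = np.array([3.1, 7.8, 5.2, 8.9, 2.4, 6.7])  # e.g., survey responses
llm_scores    = np.array([5.0, 6.1, 5.4, 6.3, 4.9, 5.8])  # e.g., scores elicited from an LLM

# (1) Variance comparison: a much smaller variance in the LLM scores
# suggests the model compresses cross-cultural differences.
print("survey variance:", survey_scores.var())
print("LLM variance:   ", llm_scores.var())

# (2) Cluster alignment: cluster countries by survey scores and by LLM
# scores, then compare the groupings (1.0 = identical clusterings).
k = 2
survey_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(survey_scores.reshape(-1, 1))
llm_clusters    = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(llm_scores.reshape(-1, 1))
print("adjusted Rand index:", adjusted_rand_score(survey_clusters, llm_clusters))
```

The third method, comparative prompting with systematically chosen token pairs, is largely a matter of prompt construction and is not sketched here.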

Country of Origin
🇳🇱 Netherlands

Page Count
10 pages

Category
Computer Science:
Computation and Language