On The Conceptualization and Societal Impact of Cross-Cultural Bias
By: Vitthal Bhandari
Potential Business Impact:
Helps developers identify and reduce cultural bias in AI language systems.
Research has shown that while large language models (LLMs) can adapt their responses to cultural context, they are far from perfect and tend to overgeneralize across cultures. Moreover, when evaluating the cultural bias of a language technology on a given dataset, researchers often do not engage with the stakeholders who actually use that technology in real life, which sidesteps the very problem they set out to address. Inspired by the work in arXiv:2005.14050v2, I analysed recent literature on identifying and evaluating cultural bias in Natural Language Processing (NLP). I selected 20 papers on cultural bias published in 2025 and distilled a set of observations to help future NLP researchers conceptualize bias concretely and evaluate its harms effectively. My aim is to advocate for a robust assessment of the societal impact of language technologies that exhibit cross-cultural bias.
Similar Papers
Cross-Language Bias Examination in Large Language Models
Computers and Society
Examines how bias appears across languages in large language models.
Social Bias in Multilingual Language Models: A Survey
Computation and Language
Surveys social bias in multilingual language models across cultures.