Social Bias in Multilingual Language Models: A Survey
By: Lance Calvin Lim Gamboa, Yue Feng, Mark Lee
Potential Business Impact:
Fixes language model bias across cultures.
Pretrained multilingual models exhibit the same social biases found in models that process English texts. This systematic review analyzes emerging research that extends bias evaluation and mitigation approaches into multilingual and non-English contexts. We examine these studies with respect to linguistic diversity, cultural awareness, and their choice of evaluation metrics and mitigation techniques. Our survey illuminates gaps in the field's dominant methodological design choices (e.g., preference for certain languages, scarcity of multilingual mitigation experiments) while cataloging common issues encountered and solutions implemented in adapting bias benchmarks across languages and cultures. Drawing from the implications of our findings, we chart directions for future research that can reinforce the multilingual bias literature's inclusivity, cross-cultural appropriateness, and alignment with state-of-the-art NLP advancements.
Similar Papers
Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting
Computation and Language
Makes AI fairer for languages with less data.
Bias in, Bias out: Annotation Bias in Multilingual Large Language Models
Computation and Language
Shows how biased annotations make AI models unfair.