Cross-Language Bias Examination in Large Language Models

Published: December 17, 2025 | arXiv ID: 2512.16029v1

By: Yuxuan Liang, Marwa Mahmoud

Potential Business Impact:

Detects and measures unfairness in AI language models across multiple languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This study introduces a multilingual bias-evaluation framework for Large Language Models, combining explicit bias assessment via the BBQ benchmark with implicit bias measurement via a prompt-based Implicit Association Test. By translating the prompts and word lists into five target languages (English, Chinese, Arabic, French, and Spanish), we directly compare both types of bias across languages. The results reveal substantial differences in bias across languages: for example, Arabic and Spanish consistently show higher levels of stereotype bias, while Chinese and English exhibit lower levels. We also identify contrasting patterns across bias types: age shows the lowest explicit bias but the highest implicit bias, underscoring the importance of detecting implicit biases that standard benchmarks miss. These findings indicate that LLM bias varies significantly across languages and bias dimensions. The study fills a key research gap by providing a comprehensive methodology for cross-lingual bias analysis. Ultimately, our work establishes a foundation for the development of equitable multilingual LLMs, ensuring fairness and effectiveness across diverse languages and cultures.
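To make the implicit-bias measurement concrete, here is a minimal sketch (our own illustration, not the authors' code) of how a prompt-based Implicit Association Test result might be reduced to a single score per language. It assumes each translated prompt has already been run through the model and the model's choice recorded as either stereotype-consistent or counter-stereotypical; the function name, labels, and scoring convention are all assumptions for illustration.

```python
def iat_bias_score(choices):
    """Return a bias score in [-1, 1] from recorded IAT-style choices.

    +1 means the model always paired groups with stereotype-consistent
    attributes, -1 means always counter-stereotypical, 0 means balanced.
    The labels "stereotype" / "counter" are a hypothetical encoding.
    """
    if not choices:
        raise ValueError("no choices recorded")
    consistent = sum(1 for c in choices if c == "stereotype")
    counter = sum(1 for c in choices if c == "counter")
    return (consistent - counter) / (consistent + counter)

# Example: 7 stereotype-consistent vs 3 counter-stereotypical choices
# over 10 translated prompts for one language.
score = iat_bias_score(["stereotype"] * 7 + ["counter"] * 3)
print(round(score, 2))  # prints 0.4
```

Computing such a score separately for each of the five languages is what would allow the cross-language comparisons the abstract describes (e.g., higher scores for Arabic and Spanish than for Chinese and English).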

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ United States, United Kingdom

Page Count
10 pages

Category
Computer Science:
Computers and Society