Score: 1

Measuring South Asian Biases in Large Language Models

Published: May 24, 2025 | arXiv ID: 2505.18466v1

By: Mamnuya Rinki, Chahat Raj, Anjishnu Mukherjee, and more

Potential Business Impact:

Surfaces hidden cultural and intersectional biases in AI systems serving South Asian users, helping teams audit and debias multilingual language products.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluations of Large Language Models (LLMs) often overlook intersectional and culturally specific biases, particularly in underrepresented multilingual regions like South Asia. This work addresses these gaps by conducting a multilingual and intersectional analysis of LLM outputs across 10 Indo-Aryan and Dravidian languages, identifying how cultural stigmas influenced by purdah and patriarchy are reinforced in generative tasks. We construct a culturally grounded bias lexicon capturing previously unexplored intersectional dimensions including gender, religion, marital status, and number of children. We use our lexicon to quantify intersectional bias and the effectiveness of self-debiasing in open-ended generations (e.g., storytelling, hobbies, and to-do lists), where bias manifests subtly and remains largely unexamined in multilingual contexts. Finally, we evaluate two self-debiasing strategies (simple and complex prompts) to measure their effectiveness in reducing culturally specific bias in Indo-Aryan and Dravidian languages. Our approach offers a nuanced lens into cultural bias by introducing a novel bias lexicon and evaluation framework that extends beyond Eurocentric or small-scale multilingual settings.
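To make the lexicon-based approach concrete, here is a minimal sketch of how a culturally grounded bias lexicon could be used to score open-ended generations. The lexicon entries, identity-group keys, and counting rule below are illustrative assumptions for exposition, not the paper's actual lexicon or metric.

```python
# Hypothetical lexicon-based bias scoring sketch. The lexicon contents and
# the simple hit-count metric are assumptions, not the paper's method.
from collections import Counter

# Toy bias lexicon: stereotype terms keyed by an intersectional identity
# group (here, gender x marital status). A real lexicon would also cover
# religion and number of children, per the paper's dimensions.
BIAS_LEXICON = {
    ("female", "married"): ["homemaker", "obedient"],
    ("female", "unmarried"): ["burden"],
}

def bias_score(generation: str, group: tuple[str, str]) -> int:
    """Count lexicon hits for one identity group in a generated text."""
    tokens = generation.lower().split()
    counts = Counter(tokens)
    return sum(counts[term] for term in BIAS_LEXICON.get(group, []))

text = "She stayed home as an obedient homemaker"
print(bias_score(text, ("female", "married")))  # 2
```

Comparing such scores before and after a self-debiasing prompt (simple vs. complex) would give a per-group measure of how much the intervention reduced lexicon hits in each language.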

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
169 pages

Category
Computer Science:
Computation and Language