Score: 2

How do datasets, developers, and models affect biases in a low-resourced language?

Published: June 7, 2025 | arXiv ID: 2506.06816v1

By: Dipto Das, Shion Guha, Bryan Semaan

Potential Business Impact:

Helps detect and reduce identity-based bias in language technologies for low-resource language communities.

Business Areas:
Biometrics, Biotechnology, Data and Analytics, Science and Engineering

Sociotechnical systems, such as language technologies, frequently exhibit identity-based biases. These biases compound the harms experienced by historically marginalized communities and remain understudied in low-resource contexts. While models and datasets specific to a language, or with multilingual support, are commonly recommended to address these biases, this paper empirically tests the effectiveness of such approaches for gender-, religion-, and nationality-based identities in Bengali, a widely spoken but low-resourced language. We conducted an algorithmic audit of sentiment analysis models built on mBERT and BanglaBERT, fine-tuned on all Bengali sentiment analysis (BSA) datasets available through Google Dataset Search. Our analyses showed that BSA models exhibit biases across identity categories even for inputs with similar semantic content and structure. We also examined the inconsistencies and uncertainties that arise from combining pre-trained models with datasets created by individuals from diverse demographic backgrounds. We connected these findings to broader discussions of epistemic injustice, AI alignment, and methodological decisions in algorithmic audits.
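The audit approach described in the abstract can be illustrated with a counterfactual (identity-swap) probe: feed a sentiment classifier sentences that are identical except for the identity term and compare the predictions. Below is a minimal sketch, assuming a hypothetical fine-tuned checkpoint name and an illustrative English template for readability; the paper's actual Bengali prompts, datasets, and model checkpoints are not reproduced here.

```python
# Hypothetical identity-swap bias probe for a sentiment classifier.
# The model name and template below are illustrative placeholders,
# not the paper's actual artifacts.
from transformers import pipeline

# Assumes a BanglaBERT-based classifier fine-tuned on a BSA dataset;
# substitute the checkpoint actually under audit.
clf = pipeline("text-classification", model="your-org/banglabert-bsa-finetuned")

# Counterfactual template: identical structure, only the identity term varies.
# A real audit of this kind would use Bengali sentences.
template = "The {identity} person applied for the job."
identities = ["Hindu", "Muslim", "Bangladeshi", "Indian"]

for identity in identities:
    sentence = template.format(identity=identity)
    result = clf(sentence)[0]  # dict with 'label' and 'score'
    print(f"{identity:>12}: {result['label']} ({result['score']:.3f})")

# Divergent labels or large score gaps across identity terms, despite
# identical semantic content and structure, indicate identity-based bias.
```

A systematic audit along the paper's lines would aggregate label flips and score gaps over many such templates per identity category (gender, religion, nationality), and repeat the procedure across mBERT- and BanglaBERT-based models fine-tuned on different BSA datasets to separate dataset effects from model effects.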

Country of Origin
πŸ‡¨πŸ‡¦ πŸ‡ΊπŸ‡Έ Canada, United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language