PakBBQ: A Culturally Adapted Bias Benchmark for QA
By: Abdullah Hashmat, Muhammad Arham Mirza, Agha Ali Raza
Potential Business Impact:
Makes AI fairer for people speaking different languages.
With the widespread adoption of Large Language Models (LLMs) across various applications, it is imperative to ensure their fairness across all user communities. However, most LLMs are trained and evaluated on Western-centric data, with little attention paid to low-resource languages and regional contexts. To address this gap, we introduce PakBBQ, a culturally and regionally adapted extension of the original Bias Benchmark for Question Answering (BBQ) dataset. PakBBQ comprises over 214 templates and 17,180 QA pairs in both English and Urdu, covering eight bias dimensions relevant in Pakistan: age, disability, physical appearance, gender, socio-economic status, religion, regional affiliation, and language formality. We evaluate multiple multilingual LLMs under both ambiguous and explicitly disambiguated contexts, as well as negative versus non-negative question framings. Our experiments reveal (i) an average accuracy gain of 12% with disambiguation, (ii) consistently stronger counter-bias behavior in Urdu than in English, and (iii) marked framing effects that reduce stereotypical responses when questions are posed negatively. These findings highlight the importance of contextualized benchmarks and simple prompt-engineering strategies for bias mitigation in low-resource settings.
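To make the evaluation setup concrete, the sketch below illustrates the general BBQ-style item structure the abstract describes: each template produces an ambiguous context (where "unknown" is the correct answer) and a disambiguated context (where added evidence fixes the answer), and model accuracy is compared across the two. All item text, field names, and the toy "model" here are invented for illustration; they are not drawn from the PakBBQ dataset itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a BBQ-style QA item (field names are assumptions,
# not the actual PakBBQ schema).
@dataclass
class BBQItem:
    category: str       # e.g. "age", "gender", "religion"
    context: str        # ambiguous or disambiguated scene
    question: str       # negative or non-negative framing
    choices: tuple      # (group_a, group_b, "unknown")
    label: int          # index of the correct choice

def accuracy(items, predict):
    """Fraction of items where the model's predicted index matches the label."""
    if not items:
        return 0.0
    return sum(1 for it in items if predict(it) == it.label) / len(items)

# Toy example pair (invented text), negative framing in both cases:
ambiguous = BBQItem(
    "age",
    "An elderly man and a young man were waiting at the clinic.",
    "Who forgot the appointment time?",
    ("the elderly man", "the young man", "unknown"),
    2,  # with no evidence, "unknown" is correct
)
disambiguated = BBQItem(
    "age",
    "An elderly man and a young man were waiting at the clinic. "
    "The young man admitted he had mixed up the date.",
    "Who forgot the appointment time?",
    ("the elderly man", "the young man", "unknown"),
    1,  # added evidence now points to the young man
)

# A maximally stereotyped "model" that always blames the first-listed group:
stereotyped = lambda item: 0
print(accuracy([ambiguous, disambiguated], stereotyped))  # prints 0.0
```

In the real benchmark, the disambiguation gain reported in the abstract corresponds to accuracy on disambiguated items exceeding accuracy on their ambiguous counterparts by about 12% on average.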
Similar Papers
BharatBBQ: A Multilingual Bias Benchmark for Question Answering in the Indian Context
Computation and Language
Tests AI for unfairness in Indian languages.
PBBQ: A Persian Bias Benchmark Dataset Curated with Human-AI Collaboration for Large Language Models
Computation and Language
Helps computers understand Persian culture without bias.
VoiceBBQ: Investigating Effect of Content and Acoustics in Social Bias of Spoken Language Model
Computation and Language
Tests how AI voices show unfairness.