PBBQ: A Persian Bias Benchmark Dataset Curated with Human-AI Collaboration for Large Language Models
By: Farhan Farsi , Shayan Bali , Fatemeh Valeh and more
Potential Business Impact:
Helps computers understand Persian culture without bias.
With the increasing adoption of large language models (LLMs), ensuring their alignment with social norms has become a critical concern. While prior research has examined bias detection in various languages, there remains a significant gap in resources addressing social biases within Persian cultural contexts. In this work, we introduce PBBQ, a comprehensive benchmark dataset designed to evaluate social biases in Persian LLMs. Our benchmark, which encompasses 16 cultural categories, was developed through questionnaires completed by 250 diverse individuals across multiple demographics, in close collaboration with social science experts to ensure its validity. The resulting PBBQ dataset contains over 37,000 carefully curated questions, providing a foundation for the evaluation and mitigation of bias in Persian language models. We benchmark several open-source LLMs, a closed-source model, and Persian-specific fine-tuned models on PBBQ. Our findings reveal that current LLMs exhibit significant social biases across Persian culture. Additionally, by comparing model outputs to human responses, we observe that LLMs often replicate human bias patterns, highlighting the complex interplay between learned representations and cultural stereotypes.Upon acceptance of the paper, our PBBQ dataset will be publicly available for use in future work. Content warning: This paper contains unsafe content.
Similar Papers
PakBBQ: A Culturally Adapted Bias Benchmark for QA
Computation and Language
Makes AI fairer for people speaking different languages.
BharatBBQ: A Multilingual Bias Benchmark for Question Answering in the Indian Context
Computation and Language
Tests AI for unfairness in Indian languages.
VoiceBBQ: Investigating Effect of Content and Acoustics in Social Bias of Spoken Language Model
Computation and Language
Tests how AI voices show unfairness.