IndRegBias: A Dataset for Studying Indian Regional Biases in English and Code-Mixed Social Media Comments
By: Debasmita Panda, Akash Anil, Neelesh Kumar Shukla
Potential Business Impact:
Helps computers spot unfairness about Indian places.
Warning: This paper contains examples of regional biases within India that may be offensive to particular regions.

While social biases related to gender, race, socio-economic status, etc., have been extensively studied in major Natural Language Processing (NLP) applications, region-related biases have received less attention. This is mainly because of (i) the difficulty of extracting regional-bias datasets, (ii) annotation disagreements caused by inherent human biases, and (iii) regional biases being studied in combination with other social biases and thus often being under-represented. This paper introduces IndRegBias, a dataset of regional biases in the Indian context as reflected in users' comments on two popular social media platforms, Reddit and YouTube. We carefully selected 25,000 comments from Reddit threads and YouTube videos discussing trending regional issues in India. We further propose a multilevel annotation strategy that labels each comment with the severity of its regionally biased content. To detect the presence and severity of regional bias in IndRegBias, we evaluate open-source Large Language Models (LLMs) and Indic Language Models (ILMs) using zero-shot, few-shot, and fine-tuning strategies. We observe that the zero-shot and few-shot approaches show low accuracy in detecting regional bias and its severity for the majority of the LLMs and ILMs, whereas fine-tuning significantly enhances the LLMs' performance in detecting Indian regional bias along with its severity.
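The abstract contrasts zero-shot and few-shot evaluation of LLMs on severity classification. A minimal sketch of how such prompts are typically assembled is shown below; the severity labels, example comments, and `build_prompt` helper are hypothetical illustrations, not the authors' actual annotation scheme or code.

```python
# Hypothetical severity labels for illustration only; the paper's
# multilevel annotation scheme may differ.
SEVERITY_LABELS = ["no_bias", "mild", "moderate", "severe"]

def build_prompt(comment, few_shot_examples=None):
    """Build a zero-shot prompt, or a few-shot prompt if labeled
    demonstrations are supplied."""
    lines = [
        "Classify the Indian regional bias severity of the comment below.",
        f"Answer with exactly one of: {', '.join(SEVERITY_LABELS)}.",
    ]
    # Few-shot: prepend a handful of labeled demonstrations.
    for ex_comment, ex_label in (few_shot_examples or []):
        lines.append(f"Comment: {ex_comment}\nSeverity: {ex_label}")
    # The comment to classify, with the label left for the model to fill.
    lines.append(f"Comment: {comment}\nSeverity:")
    return "\n\n".join(lines)

# Zero-shot: instruction and target comment only.
zero_shot = build_prompt("People from region X are always late.")

# Few-shot: same instruction plus labeled examples.
few_shot = build_prompt(
    "People from region X are always late.",
    few_shot_examples=[("Nice weather in region Y today.", "no_bias")],
)
```

The resulting string would be sent to an LLM's completion or chat endpoint; fine-tuning, by contrast, updates the model's weights on the labeled dataset rather than relying on in-context instructions.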
Similar Papers
Measuring South Asian Biases in Large Language Models
Computation and Language
Finds hidden biases in AI for different cultures.
BharatBBQ: A Multilingual Bias Benchmark for Question Answering in the Indian Context
Computation and Language
Tests AI for unfairness in Indian languages.
Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting
Computation and Language
Makes AI fairer for languages with less data.