Stereotype Detection as a Catalyst for Enhanced Bias Detection: A Multi-Task Learning Approach
By: Aditya Tomar, Rudra Murthy, Pushpak Bhattacharyya
Potential Business Impact:
Makes AI fairer by detecting bias and stereotypes in text.
Bias and stereotypes in language models can cause harm, especially in sensitive areas like content moderation and decision-making. This paper addresses bias and stereotype detection by exploring how jointly learning the two tasks enhances model performance. We introduce StereoBias, a unique dataset labeled for bias and stereotype detection across five categories, religion, gender, socio-economic status, race, and profession, plus an additional others category, enabling a deeper study of their relationship. Our experiments compare encoder-only models with decoder-only models fine-tuned using QLoRA. While encoder-only models perform well, decoder-only models also show competitive results. Crucially, joint training on bias and stereotype detection significantly improves bias detection compared to training the tasks separately. Additional experiments with sentiment analysis confirm that the improvements stem from the connection between bias and stereotypes, not from multi-task learning alone. These findings highlight the value of leveraging stereotype information to build fairer and more effective AI systems.
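The joint setup described above, one model trained on both bias and stereotype labels, can be illustrated with a minimal sketch. This is not the authors' released code: the encoder name (bert-base-uncased), the binary label spaces, and the equal weighting of the two task losses are assumptions for illustration only.

```python
# Minimal sketch of joint bias + stereotype detection with a shared encoder
# and two classification heads, trained on the sum of both task losses.
# Encoder choice, label sizes, and loss weighting are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class JointBiasStereotypeModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_bias_labels=2, num_stereo_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.bias_head = nn.Linear(hidden, num_bias_labels)      # bias detection head
        self.stereo_head = nn.Linear(hidden, num_stereo_labels)  # stereotype detection head
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask,
                bias_labels=None, stereo_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] representation
        bias_logits = self.bias_head(cls)
        stereo_logits = self.stereo_head(cls)
        loss = None
        if bias_labels is not None and stereo_labels is not None:
            # Joint objective: sum of the per-task losses (equal weighting assumed).
            loss = (self.loss_fn(bias_logits, bias_labels)
                    + self.loss_fn(stereo_logits, stereo_labels))
        return loss, bias_logits, stereo_logits


# Usage sketch: one labeled example with both a bias and a stereotype label.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = JointBiasStereotypeModel()
batch = tok(["Example sentence."], return_tensors="pt", padding=True)
loss, bias_logits, stereo_logits = model(
    batch["input_ids"], batch["attention_mask"],
    bias_labels=torch.tensor([1]), stereo_labels=torch.tensor([0]),
)
loss.backward()
```

Replacing the stereotype head with a sentiment head in the same setup corresponds to the paper's control experiment, which tests whether gains come from the bias-stereotype connection rather than from multi-task learning in general.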
Similar Papers
StereoDetect: Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social Psychological Underpinnings
Computation and Language
Helps computers spot harmful stereotypes and biases.
Stereotype Detection in Natural Language Processing
Computation and Language
Finds bias to stop hate speech early.
Are Stereotypes Leading LLMs' Zero-Shot Stance Detection?
Computation and Language
Helps computers judge opinions fairly.