Stereotype Detection in Natural Language Processing
By: Alessandra Teresa Cignarella, Anastasia Giachanou, Els Lefever
Potential Business Impact:
Detects stereotypes early to help stop bias from escalating into hate speech.
Stereotypes influence social perceptions and can escalate into discrimination and violence. While NLP research has extensively addressed gender bias and hate speech, stereotype detection remains an emerging field with significant societal implications. This work presents a survey of existing research, analyzing definitions from psychology, sociology, and philosophy. A semi-automatic literature review was performed using Semantic Scholar: over 6,000 papers published between 2000 and 2025 were retrieved and filtered, identifying key trends, methodologies, challenges, and future directions. The findings emphasize stereotype detection as a potential early-monitoring tool to prevent bias escalation and the rise of hate speech. Conclusions highlight the need for a broader, multilingual, and intersectional approach in NLP studies.
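The abstract describes the retrieval step only at a high level. As a hedged illustration, the sketch below shows how such a semi-automatic query-and-filter pass could be run against the public Semantic Scholar Graph API; the search endpoint and response fields are the API's own, while the query string, keyword filter, and paging limit are assumptions made for illustration, not the authors' actual protocol.

```python
import requests

# Illustrative sketch of a semi-automatic literature retrieval step.
# The query terms, filter keywords, and record cap are assumptions,
# not the survey's actual search protocol.
API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def fetch_candidates(query, year_range="2000-2025", max_records=500):
    """Page through Semantic Scholar search results for a query."""
    papers, offset = [], 0
    while offset < max_records:
        resp = requests.get(
            API_URL,
            params={
                "query": query,
                "year": year_range,
                "fields": "title,abstract,year,externalIds",
                "offset": offset,
                "limit": 100,  # maximum page size accepted by the API
            },
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break
        papers.extend(batch)
        offset += len(batch)
    return papers

def keyword_filter(papers, keywords=("stereotype", "stereotypical")):
    """Keep papers whose title or abstract mentions any keyword."""
    kept = []
    for p in papers:
        text = f"{p.get('title') or ''} {p.get('abstract') or ''}".lower()
        if any(k in text for k in keywords):
            kept.append(p)
    return kept

if __name__ == "__main__":
    candidates = fetch_candidates("stereotype detection natural language processing")
    filtered = keyword_filter(candidates)
    print(f"Retrieved {len(candidates)} candidates, kept {len(filtered)} after filtering.")
```

In practice such a pass would be repeated for several queries and followed by manual screening, which is what makes the review semi-automatic rather than fully automated.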
Similar Papers
StereoDetect: Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social Psychological Underpinnings
Computation and Language
Helps computers spot harmful stereotypes and biases.
Are Stereotypes Leading LLMs' Zero-Shot Stance Detection?
Computation and Language
Helps computers judge opinions fairly.
Stereotype Detection as a Catalyst for Enhanced Bias Detection: A Multi-Task Learning Approach
Computation and Language
Makes AI fairer by understanding bias and stereotypes.