Algorithmic Fairness in NLP: Persona-Infused LLMs for Human-Centric Hate Speech Detection
By: Ewelina Gajewska, Arda Derbent, Jaroslaw A. Chudziak and more
Potential Business Impact:
Makes AI better at spotting hate speech fairly.
In this paper, we investigate how personalising Large Language Models (Persona-LLMs) with annotator personas affects their sensitivity to hate speech, particularly regarding biases linked to shared or differing identities between annotators and targets. To this end, we employ Google's Gemini and OpenAI's GPT-4.1-mini models together with two persona-prompting methods: shallow persona prompting and deeply contextualised persona development based on Retrieval-Augmented Generation (RAG), which incorporates richer persona profiles. We analyse the impact of using in-group and out-group annotator personas on the models' detection performance and fairness across diverse social groups. This work bridges psychological insights on group identity with advanced NLP techniques, demonstrating that incorporating socio-demographic attributes into LLMs can address bias in automated hate speech detection. Our results highlight both the potential and limitations of persona-based approaches in reducing bias, offering valuable insights for developing more equitable hate speech detection systems.
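To make the two prompting setups concrete, below is a minimal sketch of what shallow persona prompting versus RAG-style persona development could look like. It is not the paper's implementation: it assumes the OpenAI Python client and the GPT-4.1-mini model named in the abstract, the persona text and background corpus are hypothetical placeholders, and retrieval is approximated with simple keyword overlap instead of a real vector store.

```python
# Sketch only: shallow vs. RAG-enriched persona prompting for hate speech annotation.
# Assumptions (not from the paper): OpenAI Python client, keyword-overlap "retrieval",
# and invented persona/background strings purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def shallow_persona_label(persona: str, post: str) -> str:
    """Shallow persona prompting: a one-line persona in the system message."""
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {
                "role": "system",
                "content": f"You are an annotator. Persona: {persona}. "
                           "Label the post as HATE or NOT_HATE.",
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content


def rag_persona_label(persona: str, background_corpus: list[str], post: str) -> str:
    """Deeper persona development: retrieve persona background passages and
    add them to the prompt (a stand-in for a full RAG pipeline)."""
    # Toy retrieval: rank background passages by word overlap with the persona.
    persona_words = set(persona.lower().split())
    retrieved = sorted(
        background_corpus,
        key=lambda passage: len(persona_words & set(passage.lower().split())),
        reverse=True,
    )[:2]
    context = "\n".join(retrieved)
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {
                "role": "system",
                "content": f"You are an annotator. Persona: {persona}.\n"
                           f"Background about this persona:\n{context}\n"
                           "Label the post as HATE or NOT_HATE.",
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content
```

Under this sketch, comparing in-group and out-group annotator personas amounts to holding the post fixed and swapping the persona (and its retrieved background) between one that shares the target's identity and one that does not, then comparing the resulting labels across social groups.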
Similar Papers
Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models
Computation and Language
Makes AI less biased when judging mean words.
A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas
Computation and Language
AI creates fake people that sound like stereotypes.
Think Like a Person Before Responding: A Multi-Faceted Evaluation of Persona-Guided LLMs for Countering Hate
Computation and Language
Makes online hate speech less hurtful.