Robust Persona-Aware Toxicity Detection with Prompt Optimization and Learned Ensembling
By: Berk Atil, Rebecca J. Passonneau, Ninareh Mehrabi
Potential Business Impact:
Helps AI judge toxic language more fairly for everyone.
Toxicity detection is inherently subjective, shaped by the diverse perspectives and social priors of different demographic groups. While "pluralistic" modeling, as used in economics and the social sciences, aims to capture perspective differences across contexts, current Large Language Model (LLM) prompting techniques yield inconsistent results across personas and base models. In this work, we conduct a systematic evaluation of persona-aware toxicity detection, showing that no single prompting method, including our proposed automated prompt optimization strategy, uniformly dominates across all model-persona pairs. To exploit complementary errors, we explore ensembling four prompting variants and propose a lightweight meta-ensemble: an SVM over the 4-bit vector of prompt predictions. Our results demonstrate that the proposed SVM ensemble consistently outperforms individual prompting methods and traditional majority-voting baselines, achieving the strongest overall performance across diverse personas. This work provides one of the first systematic comparisons of persona-conditioned prompting for toxicity detection and offers a robust method for pluralistic evaluation in subjective NLP tasks.
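To make the meta-ensemble idea concrete, here is a minimal sketch of training an SVM over the 4-bit vectors of prompt predictions, assuming each of the four prompting variants emits a binary toxicity label per example. The toy data and variable names are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch (assumption): an SVM meta-classifier over the 0/1 predictions
# of four prompting variants, as described in the abstract.
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: each row is one example's "4-bit vector" of
# predictions from the four prompting variants; y holds gold toxicity labels.
X_train = np.array([
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

# Lightweight meta-ensemble: fit the SVM on the prompt-prediction vectors.
meta = SVC(kernel="rbf")
meta.fit(X_train, y_train)

# At inference, gather the four prompt predictions for a new example and
# let the SVM make the final ensemble decision.
x_new = np.array([[1, 0, 0, 1]])
print("ensemble prediction:", meta.predict(x_new)[0])
```

Unlike simple majority voting, a learned combiner of this kind can weight prompting variants unequally, which is what lets it exploit their complementary errors across model-persona pairs.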
Similar Papers
Evolving Prompts for Toxicity Search in Large Language Models
Neural and Evolutionary Computing
Finds ways to make AI say bad things.
How Toxic Can You Get? Search-based Toxicity Testing for Large Language Models
Software Engineering
Finds and fixes harmful words in AI.
Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
Computation and Language
Makes AI smarter by telling it who to be.