Robust Persona-Aware Toxicity Detection with Prompt Optimization and Learned Ensembling

Published: January 5, 2026 | arXiv ID: 2601.02337v1

By: Berk Atil, Rebecca J. Passonneau, Ninareh Mehrabi

Potential Business Impact:

Helps AI systems judge toxic language more reliably across different audiences and perspectives.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Toxicity detection is inherently subjective, shaped by the diverse perspectives and social priors of different demographic groups. While "pluralistic" modeling, as used in economics and the social sciences, aims to capture perspective differences across contexts, current Large Language Model (LLM) prompting techniques yield inconsistent results across personas and base models. In this work, we conduct a systematic evaluation of persona-aware toxicity detection, showing that no single prompting method, including our proposed automated prompt optimization strategy, uniformly dominates across all model-persona pairs. To exploit complementary errors, we explore ensembling four prompting variants and propose a lightweight meta-ensemble: an SVM over the 4-bit vector of prompt predictions. Our results demonstrate that the proposed SVM ensemble consistently outperforms individual prompting methods and traditional majority-voting techniques, achieving the strongest overall performance across diverse personas. This work provides one of the first systematic comparisons of persona-conditioned prompting for toxicity detection and offers a robust method for pluralistic evaluation in subjective NLP tasks.
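The meta-ensemble described above can be sketched in a few lines: each comment yields a 4-bit vector of binary toxicity predictions (one bit per prompting variant), and an SVM is trained on those vectors to beat simple majority voting. The sketch below uses synthetic bits and labels purely for illustration; the paper's actual features come from LLM prompt outputs, and the SVM hyperparameters here are assumptions, not the authors' settings.

```python
# Minimal sketch of an SVM meta-ensemble over binary prompt predictions.
# Synthetic data: real inputs would be per-comment predictions from four
# persona-conditioned prompting variants.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Simulated binary predictions from 4 prompt variants for 200 comments.
X = rng.integers(0, 2, size=(200, 4))

# Simulated gold labels: an (assumed) weighted combination of the prompt
# bits, so variants differ in reliability and a learned combiner can
# outperform unweighted majority voting.
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]) > 0.5).astype(int)

# The learned ensemble: an SVM over the 4-bit prediction vectors.
# (Kernel and C are illustrative choices, not from the paper.)
clf = SVC(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)

# Majority-vote baseline: toxic if at least 2 of 4 prompts say toxic.
majority = (X.sum(axis=1) >= 2).astype(int)

print("SVM ensemble accuracy:", (clf.predict(X) == y).mean())
print("Majority-vote accuracy:", (majority == y).mean())
```

With only four binary features there are just 16 possible input patterns, so the SVM effectively learns which prompt-agreement patterns to trust, rather than weighting each prompt equally as majority voting does.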

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
16 pages

Category
Computer Science:
Computation and Language