The Impact of Annotator Personas on LLM Behavior Across the Perspectivism Spectrum

Published: August 23, 2025 | arXiv ID: 2508.17164v1

By: Olufunke O. Sarumi, Charles Welch, Daniel Braun, et al.

Potential Business Impact:

Helps automated systems judge online hate speech more fairly.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In this work, we explore the capability of Large Language Models (LLMs) to annotate hate speech and abusiveness while conditioning on predefined annotator personas along the strong-to-weak data perspectivism spectrum. We evaluated LLM-generated annotations against existing annotator modeling techniques for perspective modeling. Our findings show that LLMs use demographic attributes from the personas selectively. We identified prototypical annotators whose persona features show varying degrees of alignment with the original human annotators. Within the data perspectivism paradigm, annotator modeling techniques that do not explicitly rely on annotator information performed better under weak data perspectivism than under either strong data perspectivism or human annotations, suggesting that LLM-generated views tend toward aggregation despite subjective prompting. However, for more personalized datasets tailored to strong perspectivism, the performance of LLM annotator modeling approached, but did not exceed, that of human annotators.
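The abstract's setup can be illustrated with a minimal sketch: render each annotator persona into an annotation prompt, then contrast weak perspectivism (collapsing per-persona labels into one aggregate) with strong perspectivism (keeping one label per annotator). The persona fields, prompt wording, and label set below are illustrative assumptions, not the paper's exact protocol, and the LLM call is stubbed out.

```python
# Sketch of persona-conditioned annotation and weak- vs strong-perspectivist
# aggregation. Persona attributes, prompt text, and labels are assumptions
# for illustration only; a real pipeline would query an LLM per prompt.
from collections import Counter


def build_prompt(persona: dict, text: str) -> str:
    """Render an annotator persona into a hate-speech annotation prompt."""
    persona_desc = ", ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are an annotator with this profile: {persona_desc}. "
        f"Label the text as 'hate' or 'not-hate'.\nText: {text}\nLabel:"
    )


def aggregate_weak(labels: list) -> str:
    """Weak perspectivism: collapse per-persona labels by majority vote."""
    return Counter(labels).most_common(1)[0][0]


def aggregate_strong(personas: list, labels: list) -> dict:
    """Strong perspectivism: retain one label per annotator persona."""
    return {p["id"]: label for p, label in zip(personas, labels)}


personas = [
    {"id": "a1", "age": "25", "gender": "female"},
    {"id": "a2", "age": "40", "gender": "male"},
    {"id": "a3", "age": "31", "gender": "female"},
]
prompts = [build_prompt(p, "example post") for p in personas]

# Stubbed LLM responses, one per persona prompt.
labels = ["hate", "not-hate", "hate"]
print(aggregate_weak(labels))             # single aggregated label
print(aggregate_strong(personas, labels))  # per-annotator labels
```

The two aggregation functions make the paper's contrast concrete: weak perspectivism discards annotator identity at aggregation time, while strong perspectivism preserves each persona's judgment as a separate target.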

Country of Origin
🇩🇪 Germany

Page Count
16 pages

Category
Computer Science:
Computation and Language