The Effects of Demographic Instructions on LLM Personas
By: Angel Felipe Magnossão de Paula, J. Shane Culpepper, Alistair Moffat, and others
Potential Business Impact:
Helps social media platforms detect sexism as it is perceived from different demographic perspectives.
Social media platforms must filter sexist content in compliance with governmental regulations. Current machine learning approaches can reliably detect sexism based on standardized definitions, but often neglect the subjective nature of sexist language and fail to consider individual users' perspectives. To address this gap, we adopt a perspectivist approach, retaining diverse annotations rather than enforcing gold-standard labels or their aggregations, allowing models to account for personal or group-specific views of sexism. Using demographic data from Twitter, we employ large language models (LLMs) to personalize the identification of sexism.
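The personalization described above can be sketched as persona-conditioned prompting: the model is given an annotator's demographic profile and asked to label a tweet from that annotator's point of view. The prompt template and demographic field names below are illustrative assumptions, not the authors' actual implementation:

```python
def build_persona_prompt(demographics: dict, text: str) -> str:
    """Compose a prompt asking the LLM to judge `text` as the described
    annotator would. Field names (gender, age, country) are hypothetical."""
    persona = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    return (
        f"You are an annotator with the following profile: {persona}.\n"
        "From this person's perspective, is the following tweet sexist?\n"
        "Answer YES or NO.\n\n"
        f"Tweet: {text}"
    )

# Example: one annotator profile drawn from (hypothetical) Twitter metadata.
prompt = build_persona_prompt(
    {"gender": "female", "age": "23-45", "country": "Spain"},
    "Women should stay out of technical discussions.",
)
print(prompt)
```

The same tweet would be sent once per annotator profile, and the per-persona answers are kept as separate labels rather than aggregated into a single gold standard.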
Similar Papers
Demographic Biases and Gaps in the Perception of Sexism in Large Language Models
Computation and Language
Detects sexism, but misses differences in how individuals perceive it.
Beyond Demographics: Fine-tuning Large Language Models to Predict Individuals' Subjective Text Perceptions
Computation and Language
Shows models struggle to learn from people's backgrounds.
Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation
Computation and Language
Helps computers judge sexism fairly, not by who wrote it.