The Effects of Demographic Instructions on LLM Personas
By: Angel Felipe Magnossão de Paula, J. Shane Culpepper, Alistair Moffat, et al.
Potential Business Impact:
Helps social media platforms detect sexism as it is perceived by different demographic groups.
Social media platforms must filter sexist content in compliance with governmental regulations. Current machine learning approaches can reliably detect sexism based on standardized definitions, but often neglect the subjective nature of sexist language and fail to consider individual users' perspectives. To address this gap, we adopt a perspectivist approach, retaining diverse annotations rather than enforcing gold-standard labels or their aggregations, allowing models to account for personal or group-specific views of sexism. Using demographic data from Twitter, we employ large language models (LLMs) to personalize the identification of sexism.
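The abstract describes conditioning an LLM on annotator demographics so that the model judges sexism from that person's or group's perspective. A minimal sketch of such persona prompting is below; the profile fields and prompt wording are illustrative assumptions, not the paper's actual template.

```python
# Hypothetical sketch of demographic "persona" prompting for sexism
# classification. The profile keys (gender, age, country) and the prompt
# wording are assumptions for illustration; the resulting string would be
# sent to an LLM as the classification request.

def build_persona_prompt(text: str, persona: dict) -> str:
    """Prefix the classification request with the annotator's demographics."""
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are annotating tweets as a person with this profile: {profile}.\n"
        f"From your perspective, is the following tweet sexist? Answer YES or NO.\n"
        f"Tweet: {text}"
    )

prompt = build_persona_prompt(
    "example tweet text",
    {"gender": "female", "age": "23-45", "country": "UK"},
)
print(prompt)
```

Under a perspectivist setup, the same tweet would be classified once per annotator profile, and the per-persona labels kept rather than collapsed into a single gold label.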
Similar Papers
Demographic Biases and Gaps in the Perception of Sexism in Large Language Models
Computation and Language
Detects sexism, but does not capture every individual's perspective.
Evaluating LLMs for Demographic-Targeted Social Bias Detection: A Comprehensive Benchmark Study
Computation and Language
Measures how well LLMs detect demographic-targeted social bias.