Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups
By: Rijul Magu, Arka Dutta, Sean Kim, and more
Potential Business Impact:
AI can unfairly target people with mental health issues.
Large Language Models (LLMs) have been shown to exhibit imbalanced biases against certain groups. However, unprovoked targeted attacks by LLMs on at-risk populations remain underexplored. Our paper presents three novel contributions: (1) an explicit evaluation of LLM-generated attacks on highly vulnerable mental health groups; (2) a network-based framework for studying the propagation of relative biases; and (3) an assessment of the relative degree of stigmatization that emerges from these attacks. Our analysis of a recently released large-scale bias audit dataset reveals that mental health entities occupy central positions within attack narrative networks, as indicated by a significantly higher mean closeness centrality (p-value = 4.06e-10) and dense clustering (Gini coefficient = 0.7). Drawing on sociological foundations of stigmatization theory, our stigmatization analysis indicates increased labeling components for mental health disorder-related targets relative to initial targets in generation chains. Taken together, these insights shed light on the structural predilection of large language models to heighten harmful discourse and highlight the need for suitable mitigation approaches.
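For readers unfamiliar with the two network statistics cited above, the sketch below shows one common way to compute them. This is not the authors' code: it uses NetworkX and NumPy on a toy directed graph whose edges (an assumption for illustration) point from an initial target to the entity attacked next in a generation chain.

```python
import networkx as nx
import numpy as np

# Toy attack-narrative network (illustrative data only): an edge u -> v means
# a narrative seeded with target u went on to attack entity v.
edges = [
    ("group_a", "depression"), ("group_b", "depression"),
    ("group_c", "anxiety"), ("depression", "anxiety"),
    ("anxiety", "group_d"), ("group_e", "depression"),
]
G = nx.DiGraph(edges)

# Closeness centrality: higher values indicate nodes that sit "near" many
# other nodes along the attack chains, i.e. central positions in the network.
closeness = nx.closeness_centrality(G)
print("closeness centrality:", closeness)

def gini(values):
    """Gini coefficient of a list of non-negative values.

    0 means attacks are spread evenly across entities; values near 1 mean
    they concentrate heavily on a few entities.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Measure how unevenly incoming attacks are distributed, using in-degree
# as a simple proxy for how often each entity is targeted.
in_degrees = [deg for _, deg in G.in_degree()]
print("Gini of in-degree:", round(gini(in_degrees), 3))
```

In this toy graph the mental-health nodes accumulate most of the incoming edges, so their closeness scores are high and the in-degree Gini is well above zero, mirroring (in miniature) the kind of pattern the abstract reports.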
Similar Papers
Large Language Models Develop Novel Social Biases Through Adaptive Exploration
Computers and Society
Computers can invent new unfairness, not just copy it.
Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective
Computation and Language
Fixes computer "thinking" to be less unfair.
Investigating Gender Bias in LLM-Generated Stories via Psychological Stereotypes
Computation and Language
Finds how stories show gender bias.