Large Language Models Develop Novel Social Biases Through Adaptive Exploration
By: Addison J. Wu, Ryan Liu, Xuechunzi Bai, and others
Potential Business Impact:
Computers can invent new unfairness, not just copy it.
As large language models (LLMs) are adopted into frameworks that grant them the capacity to make real decisions, it is increasingly important to ensure that they are unbiased. In this paper, we argue that the predominant approach of simply removing existing biases from models is not enough. Using a paradigm from the psychology literature, we demonstrate that LLMs can spontaneously develop novel social biases about artificial demographic groups even when no inherent differences exist. These biases result in highly stratified task allocations, which are less fair than assignments by human participants and are exacerbated by newer and larger models. In social science, emergent biases like these have been shown to result from exploration-exploitation trade-offs, where the decision-maker explores too little, allowing early observations to strongly influence impressions about entire demographic groups. To alleviate this effect, we examine a series of interventions targeting model inputs, problem structure, and explicit steering. We find that explicitly incentivizing exploration most robustly reduces stratification, highlighting the need for better multifaceted objectives to mitigate bias. These results reveal that LLMs are not merely passive mirrors of human social biases, but can actively create new ones from experience, raising urgent questions about how these systems will shape societies over time.
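The exploration-exploitation mechanism described above can be illustrated with a toy simulation (this is an invented sketch, not the paper's actual experiment; the two-group setup, success probability, and epsilon values are assumptions for illustration). Two artificial groups have identical true quality, yet a purely exploitative decision-maker can let a few early outcomes harden into a lasting impression of a whole group, while even modest forced exploration keeps both groups in play:

```python
import random

def allocate(n_rounds, epsilon, seed=0):
    """Assign one task per round to one of two artificial groups with
    IDENTICAL success probability (0.5). With epsilon=0 (pure
    exploitation), early lucky or unlucky outcomes can lock in a biased
    impression of an entire group; epsilon > 0 forces occasional
    exploration of the currently-disfavored group."""
    rng = random.Random(seed)
    counts = [0, 0]          # tasks assigned to each group so far
    estimates = [0.5, 0.5]   # running impression of each group's quality
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            g = rng.randrange(2)                          # explore: random group
        else:
            g = 0 if estimates[0] >= estimates[1] else 1  # exploit: current "best"
        outcome = 1 if rng.random() < 0.5 else 0          # both groups equally good
        counts[g] += 1
        # incremental mean update of the impression for group g
        estimates[g] += (outcome - estimates[g]) / counts[g]
    return counts

greedy = allocate(1000, epsilon=0.0)   # exploitation only
explore = allocate(1000, epsilon=0.2)  # explicit exploration incentive
```

Comparing `abs(counts[0] - counts[1])` between the two runs typically shows the greedy allocator concentrating tasks on one group despite there being no real quality difference, mirroring the stratified allocations the paper reports and why explicitly incentivizing exploration reduces them.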
Similar Papers
A Comprehensive Study of Implicit and Explicit Biases in Large Language Models
Machine Learning (CS)
Finds and fixes unfairness in AI writing.
Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection
Computation and Language
Finds hidden unfairness in AI's words.
Getting out of the Big-Muddy: Escalation of Commitment in LLMs
Artificial Intelligence
Computers can get stuck on bad choices.