Aligned but Blind: Alignment Increases Implicit Bias by Reducing Awareness of Race
By: Lihao Sun, Chengzhi Mao, Valentin Hofmann, and more
Potential Business Impact:
Makes AI less biased by making it more aware of race.
Although value-aligned language models (LMs) appear unbiased in explicit bias evaluations, they often exhibit stereotypes in implicit word association tasks, raising concerns about their fair usage. We investigate the mechanisms behind this discrepancy and find that alignment surprisingly amplifies implicit bias in model outputs. Specifically, we show that aligned LMs, unlike their unaligned counterparts, overlook racial concepts in early internal representations when the context is ambiguous. Not representing race likely fails to activate safety guardrails, leading to unintended biases. Inspired by this insight, we propose a new bias mitigation strategy that works by incentivizing the representation of racial concepts in the early model layers. In contrast to conventional machine-unlearning mitigation methods, we find that steering the model to be more aware of racial concepts effectively mitigates implicit bias. Similar to race blindness in humans, ignoring racial nuances can inadvertently perpetuate subtle biases in LMs.
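The mitigation the abstract describes amounts to activation steering: adding a "race-concept" direction to hidden states in an early layer. Below is a minimal sketch of that idea, not the authors' code; the model (gpt2 as a stand-in for an aligned LM), the layer index, the contrast prompts used to build the steering vector, and the strength ALPHA are all assumptions for illustration.

```python
# Hypothetical sketch of early-layer concept steering; not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: stand-in for an aligned LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 4  # "early" layer; an assumption, would need tuning

def hidden_at_layer(text: str, layer: int) -> torch.Tensor:
    """Mean hidden state of `text` after the given transformer block."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embeddings; [layer + 1] is block `layer`'s output.
    return out.hidden_states[layer + 1].mean(dim=1).squeeze(0)

# Steering vector = (race-explicit context) - (race-neutral context),
# a common activation-steering recipe; these prompts are invented examples.
v = (hidden_at_layer("The Black applicant submitted a resume.", LAYER)
     - hidden_at_layer("The applicant submitted a resume.", LAYER))
v = v / v.norm()

ALPHA = 4.0  # steering strength; an assumption

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the concept direction and pass the rest of the tuple through.
    hidden = output[0]
    return (hidden + ALPHA * v.to(hidden.dtype),) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
ids = tok("The applicant was described as", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()
```

Note the direction of the intervention: rather than unlearning or suppressing the concept, the hook pushes early representations toward it, mirroring the paper's finding that raising awareness of race, not erasing it, is what engages the guardrails.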
Similar Papers
A Comprehensive Study of Implicit and Explicit Biases in Large Language Models
Machine Learning (CS)
Finds and fixes unfairness in AI writing.
Revealing the Intrinsic Ethical Vulnerability of Aligned Large Language Models
Computation and Language
AI can still be tricked into saying bad things.
Silenced Biases: The Dark Side LLMs Learned to Refuse
Computation and Language
Finds unfairness that AI safety rules keep hidden.