Tracing Stereotypes in Pre-trained Transformers: From Biased Neurons to Fairer Models
By: Gianmario Voria, Moses Openja, Foutse Khomh, and more
Potential Business Impact:
Reduces AI bias by editing the specific internal neurons that store stereotypes.
The advent of transformer-based language models has reshaped how AI systems process and generate text. In software engineering (SE), these models now support diverse activities, accelerating automation and decision-making. Yet, evidence shows that these models can reproduce or amplify social biases, raising fairness concerns. Recent work on neuron editing has shown that internal activations in pre-trained transformers can be traced and modified to alter model behavior. Building on the concept of knowledge neurons, i.e., neurons that encode factual information, we hypothesize the existence of biased neurons that capture stereotypical associations within pre-trained transformers. To test this hypothesis, we build a dataset of biased relations, i.e., triplets encoding stereotypes across nine bias types, and adapt neuron attribution strategies to trace and suppress biased neurons in BERT models. We then assess the impact of suppression on SE tasks. Our findings show that biased knowledge is localized within small neuron subsets, and suppressing them substantially reduces bias with minimal performance loss. This demonstrates that bias in transformers can be traced and mitigated at the neuron level, offering an interpretable approach to fairness in SE.
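To make the trace-and-suppress idea concrete, here is a minimal sketch of how biased neurons in a BERT model might be located and zeroed out. It is not the paper's implementation: the attribution below is a simplified gradient-times-activation score standing in for the integrated-gradients-style attribution used in the knowledge-neuron literature, and the probing prompt, target token, and top-k cutoff are illustrative assumptions rather than items from the authors' dataset of biased relations.

```python
# Hypothetical sketch: tracing and suppressing "biased neurons" in BERT's
# feed-forward (intermediate) layers. Attribution here is gradient x activation,
# a simplification of the integrated-gradients scoring used for knowledge neurons.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Illustrative stereotype-probing prompt phrased as masked-token prediction.
prompt = "The nurse said that [MASK] would be back soon."
target = "she"  # stereotypical completion whose supporting neurons we trace
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
target_id = tokenizer.convert_tokens_to_ids(target)

# Capture each layer's FFN intermediate activations with forward hooks.
captured = {}
def make_capture_hook(layer_idx):
    def hook(module, inp, out):
        out.retain_grad()          # keep the gradient so we can attribute
        captured[layer_idx] = out
    return hook

handles = [layer.intermediate.register_forward_hook(make_capture_hook(i))
           for i, layer in enumerate(model.bert.encoder.layer)]

logits = model(**inputs).logits
log_prob = torch.log_softmax(logits[0, mask_pos], dim=-1)[target_id]
log_prob.backward()

# Per-neuron score at the [MASK] position: activation * gradient.
scores = torch.stack([captured[i][0, mask_pos] * captured[i].grad[0, mask_pos]
                      for i in range(len(handles))])  # (layers, intermediate_size)
for h in handles:
    h.remove()

# Treat the top-k highest-scoring neurons as candidate biased neurons (k is arbitrary here).
k = 10
top = torch.topk(scores.flatten(), k).indices
biased = [(idx // scores.size(1), idx % scores.size(1)) for idx in top.tolist()]

# Suppress them by zeroing their activations with permanent forward hooks.
def make_suppress_hook(neuron_ids):
    def hook(module, inp, out):
        out = out.clone()
        out[..., neuron_ids] = 0.0
        return out
    return hook

by_layer = {}
for layer_idx, neuron_idx in biased:
    by_layer.setdefault(layer_idx, []).append(neuron_idx)
for layer_idx, neuron_ids in by_layer.items():
    model.bert.encoder.layer[layer_idx].intermediate.register_forward_hook(
        make_suppress_hook(neuron_ids))

# Re-run the prompt: the stereotypical token's probability should drop.
with torch.no_grad():
    new_logits = model(**inputs).logits
new_prob = torch.softmax(new_logits[0, mask_pos], dim=-1)[target_id]
print(f"P('{target}') after suppression: {new_prob.item():.4f}")
```

In this sketch the suppression is a runtime hook on a handful of neurons, which mirrors the paper's finding that biased knowledge is localized in small neuron subsets and can be removed with little impact on the rest of the model; how much downstream SE task performance is preserved would still need to be measured on the tasks themselves.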
Similar Papers
Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation
Computation and Language
Fixes AI's thinking to stop unfair stereotypes.
BiasEdit: Debiasing Stereotyped Language Models via Model Editing
Computation and Language
Edits language models to make them fair and unbiased.
Stereotype Detection as a Catalyst for Enhanced Bias Detection: A Multi-Task Learning Approach
Computation and Language
Makes AI fairer by understanding bias and stereotypes.