Score: 1

Tracing Stereotypes in Pre-trained Transformers: From Biased Neurons to Fairer Models

Published: January 9, 2026 | arXiv ID: 2601.05663v1

By: Gianmario Voria, Moses Openja, Foutse Khomh, and more

Potential Business Impact:

Reduces bias in AI language models by locating and switching off the specific internal neurons that encode stereotypes, with minimal loss of task performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The advent of transformer-based language models has reshaped how AI systems process and generate text. In software engineering (SE), these models now support diverse activities, accelerating automation and decision-making. Yet evidence shows that these models can reproduce or amplify social biases, raising fairness concerns. Recent work on neuron editing has shown that internal activations in pre-trained transformers can be traced and modified to alter model behavior. Building on the concept of knowledge neurons (neurons that encode factual information), we hypothesize the existence of biased neurons that capture stereotypical associations within pre-trained transformers. To test this hypothesis, we build a dataset of biased relations, i.e., triplets encoding stereotypes across nine bias types, and adapt neuron attribution strategies to trace and suppress biased neurons in BERT models. We then assess the impact of suppression on SE tasks. Our findings show that biased knowledge is localized within small neuron subsets, and that suppressing these neurons substantially reduces bias with minimal performance loss. This demonstrates that bias in transformers can be traced and mitigated at the neuron level, offering an interpretable approach to fairness in SE.
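To make the trace-and-suppress idea concrete, the sketch below shows one way such an approach could look in practice: rank neurons in BERT's feed-forward layers by a simple gradient-times-activation attribution score on a masked stereotype prompt, then zero out the top-scoring neurons with forward hooks. This is a minimal illustration under stated assumptions, not the authors' exact pipeline; the example prompt, the attribution rule, and the top-k cutoff are all illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's exact method): attribute a
# stereotypical masked-LM prediction to individual FFN neurons in BERT,
# then suppress the highest-scoring neurons via forward hooks.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical biased relation: prompt plus its stereotypical completion.
prompt = "The nurse said that [MASK] would be back soon."
target = "she"

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
target_id = tokenizer.convert_tokens_to_ids(target)

# --- 1. Attribution: capture FFN intermediate activations and their gradients.
captured = {}

def make_capture_hook(layer_idx):
    def hook(module, inp, out):
        out.retain_grad()            # keep gradients on this non-leaf tensor
        captured[layer_idx] = out
    return hook

handles = [
    layer.intermediate.register_forward_hook(make_capture_hook(i))
    for i, layer in enumerate(model.bert.encoder.layer)
]

logits = model(**inputs).logits
logits[0, mask_pos, target_id].backward()   # score for the stereotypical token

# Rank neurons by |gradient * activation| at the [MASK] position.
scored = []
for layer_idx, act in captured.items():
    attr = (act.grad[0, mask_pos] * act[0, mask_pos]).abs()
    scored.extend((v, layer_idx, j) for j, v in enumerate(attr.tolist()))
scored.sort(reverse=True)
biased_neurons = [(i, j) for _, i, j in scored[:20]]   # illustrative top-k

for h in handles:
    h.remove()

# --- 2. Suppression: zero the selected neurons during inference.
def make_suppress_hook(neuron_ids):
    def hook(module, inp, out):
        out[..., neuron_ids] = 0.0
        return out
    return hook

for layer_idx, layer in enumerate(model.bert.encoder.layer):
    ids = [j for i, j in biased_neurons if i == layer_idx]
    if ids:
        layer.intermediate.register_forward_hook(make_suppress_hook(ids))

with torch.no_grad():
    probs = model(**inputs).logits[0, mask_pos].softmax(-1)
print("P('she') after suppression:", probs[target_id].item())
```

In this sketch the suppression is purely inference-time (hooks zero activations on the fly), which mirrors the paper's claim that editing a small neuron subset can shift biased predictions without retraining; how large the subset is and how much task performance is preserved are empirical questions the paper evaluates on SE tasks.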

Country of Origin
🇮🇹 🇨🇦 Italy, Canada

Page Count
12 pages

Category
Computer Science:
Software Engineering