BiasGym: Fantastic Biases and How to Find (and Remove) Them
By: Sekh Mainul Islam, Nadav Borenstein, Siddhesh Milind Pawar, and more
Potential Business Impact:
Finds and removes unfair biases in AI models.
Understanding biases and stereotypes encoded in the weights of Large Language Models (LLMs) is crucial for developing effective mitigation strategies. Biased behavior is often subtle and non-trivial to isolate, even when deliberately elicited, making systematic analysis and debiasing particularly challenging. To address this, we introduce BiasGym, a simple, cost-effective, and generalizable framework for reliably injecting, analyzing, and mitigating conceptual associations within LLMs. BiasGym consists of two components: BiasInject, which injects specific biases into the model via token-based fine-tuning while keeping the model frozen, and BiasScope, which leverages these injected signals to identify and steer the components responsible for biased behavior. Our method enables consistent bias elicitation for mechanistic analysis, supports targeted debiasing without degrading performance on downstream tasks, and generalizes to biases unseen during training. We demonstrate the effectiveness of BiasGym in reducing real-world stereotypes (e.g., people from a country being "reckless drivers") and in probing fictional associations (e.g., people from a country having "blue skin"), showing its utility for both safety interventions and interpretability research.
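The core idea behind BiasInject, training only the embedding of a newly added token while every pretrained weight stays frozen, can be illustrated with a minimal sketch. The toy linear "head" below is an assumption standing in for a real LLM, and all names (`emb`, `W`, `target`) are illustrative, not the paper's actual implementation:

```python
import numpy as np

# Toy illustration of token-based fine-tuning with a frozen model:
# only the embedding row of a newly added "bias token" receives
# gradient updates; the pretrained rows and the head never change.

rng = np.random.default_rng(0)
vocab, dim = 5, 4
emb = rng.normal(size=(vocab + 1, dim))   # last row = injected token
W = rng.normal(size=(dim,))               # frozen toy "model" head

frozen = emb[:vocab].copy()               # snapshot of pretrained rows
lr, target = 0.05, 1.0
for _ in range(500):
    pred = emb[vocab] @ W                 # toy forward pass, scalar output
    grad = 2.0 * (pred - target) * W      # gradient of squared-error loss
    emb[vocab] -= lr * grad               # update ONLY the injected row

assert np.allclose(emb[:vocab], frozen)   # pretrained weights untouched
```

In the real setting the frozen part would be the full transformer, and the injected token's learned embedding then serves as a controllable handle for eliciting the association that BiasScope analyzes and steers.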
Similar Papers
BiasGym: Fantastic LLM Biases and How to Find (and Remove) Them
Computation and Language
Finds and fixes unfair ideas in AI.
A Comprehensive Study of Implicit and Explicit Biases in Large Language Models
Machine Learning (CS)
Finds and fixes unfairness in AI writing.
Breaking the Benchmark: Revealing LLM Bias via Minimal Contextual Augmentation
Computation and Language
Makes AI less likely to be unfair or biased.