Activation Steering for Bias Mitigation: An Interpretable Approach to Safer LLMs
By: Shivam Dubey
Potential Business Impact:
Helps AI stop saying unfair or biased things.
As large language models (LLMs) become more integrated into societal systems, the risk of them perpetuating and amplifying harmful biases becomes a critical safety concern. Traditional methods for mitigating bias often rely on data filtering or post-hoc output moderation, which treat the model as an opaque black box. In this work, we introduce a complete, end-to-end system that uses techniques from mechanistic interpretability to both identify and actively mitigate bias directly within a model's internal workings. Our method involves two primary stages. First, we train linear "probes" on the internal activations of a model to detect the latent representations of various biases (e.g., gender, race, age). Our experiments on gpt2-large demonstrate that these probes can identify biased content with near-perfect accuracy, revealing that bias representations become most salient in the model's later layers. Second, we leverage these findings to compute "steering vectors" by contrasting the model's activation patterns for biased and neutral statements. By adding these vectors during inference, we can actively steer the model's generative process away from producing harmful, stereotypical, or biased content in real time. We demonstrate the efficacy of this activation steering technique, showing that it successfully alters biased completions toward more neutral alternatives. We present our work as a robust and reproducible system that offers a more direct and interpretable approach to building safer and more accountable LLMs.
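The two stages described in the abstract can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: it uses synthetic NumPy "activations" in place of real gpt2-large hidden states, trains a linear probe by logistic regression, and forms a steering vector as the difference of mean activations between neutral and biased examples. All names (`bias_dir`, `steer`, the dimension, the learning rate) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimension (real gpt2-large uses 1280)

# Synthetic "activations": biased statements are shifted along one direction,
# standing in for the latent bias representation the probes detect.
bias_dir = rng.normal(size=d)
bias_dir /= np.linalg.norm(bias_dir)
neutral_acts = rng.normal(size=(100, d))
biased_acts = rng.normal(size=(100, d)) + 4.0 * bias_dir

# Stage 1: train a linear probe (logistic regression via gradient descent)
# to classify biased vs. neutral activations.
X = np.vstack([neutral_acts, biased_acts])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad = p - y                              # logistic-loss gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()
acc = (((X @ w + b) > 0) == y).mean()

# Stage 2: steering vector = mean(neutral) - mean(biased) activations.
# Adding it at inference pushes a biased activation toward the neutral region.
steer = neutral_acts.mean(axis=0) - biased_acts.mean(axis=0)

h = biased_acts[0]                 # one "biased" hidden state
score_before = h @ w + b           # probe's bias score before steering
score_after = (h + steer) @ w + b  # score after adding the steering vector
```

In the real system the same arithmetic would be applied to hidden states captured at a chosen transformer layer (e.g. via forward hooks), with the steering vector added during generation; the toy version only demonstrates that the contrast-of-means vector moves activations in the direction the probe scores as less biased.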
Similar Papers
Steering Towards Fairness: Mitigating Political Bias in LLMs
Computation and Language
Makes AI less biased about politics.
Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Machine Learning (CS)
Makes AI fairer by reducing unfair ideas.