Breaking the Mirror: Activation-Based Mitigation of Self-Preference in LLM Evaluators
By: Dani Roytburg, Matthew Bozoukov, Matthew Nguyen, and more
Potential Business Impact:
Keeps AI judges from unfairly favoring their own answers.
Large language models (LLMs) increasingly serve as automated evaluators, yet they suffer from "self-preference bias": a tendency to favor their own outputs over those of other models. This bias undermines fairness and reliability in evaluation pipelines, particularly for tasks like preference tuning and model routing. We investigate whether lightweight steering vectors can mitigate this problem at inference time without retraining. We introduce a curated dataset that separates self-preference into justified and unjustified examples, and we construct steering vectors using two methods: Contrastive Activation Addition (CAA) and an optimization-based approach. Our results show that steering vectors can reduce unjustified self-preference bias by up to 97%, substantially outperforming prompting and direct preference optimization baselines. Yet steering vectors are unstable on legitimate self-preference and unbiased agreement, suggesting that self-preference spans multiple or nonlinear directions. This underscores both the promise and the limits of steering vectors as safeguards for LLM-as-judge pipelines and motivates more robust interventions.
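To make the CAA idea concrete, below is a minimal sketch of how a contrastive steering vector can be built and applied at inference time with a Hugging Face model. The model name, layer index, steering strength, and toy contrastive prompts are all illustrative assumptions, not the paper's actual setup or dataset.

```python
# Minimal sketch of Contrastive Activation Addition (CAA) for steering an LLM judge.
# Assumptions (not from the paper): model choice, layer index, steering strength,
# and the toy contrastive prompts below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in model; the paper's judge model is not specified here
LAYER = 6             # hypothetical layer at which to extract and steer activations
ALPHA = -4.0          # steering strength; negative pushes away from self-preference

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Contrastive pairs: the same evaluation prompt completed with a self-preferring
# verdict versus an impartial one.
biased = ["Judge the two answers. Verdict: my own answer (A) is better."]
unbiased = ["Judge the two answers. Verdict: answer B is better on the stated criteria."]

def mean_activation(texts, layer):
    """Mean residual-stream activation at the last token of each text."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1])  # last-token activation
    return torch.stack(acts).mean(dim=0)

# CAA vector: mean(biased activations) minus mean(unbiased activations).
steer = mean_activation(biased, LAYER) - mean_activation(unbiased, LAYER)

def hook(module, inputs, output):
    # Add the scaled steering vector to the block's hidden states at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(hook)

prompt = "Judge the two answers below and pick the better one.\n"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**ids, max_new_tokens=30)
print(tok.decode(gen[0], skip_special_tokens=True))

handle.remove()  # stop steering once evaluation is done
```

In practice the contrastive pairs would come from a labeled dataset of justified versus unjustified self-preference judgments, and the layer and strength would be chosen by sweeping over validation examples; this sketch only shows the mechanics of extracting and adding the vector.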
Similar Papers
Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Machine Learning (CS)
Makes AI fairer by steering its internal representations away from bias.
Mitigating Self-Preference by Authorship Obfuscation
Computation and Language
Makes AI judges fairer by hiding who wrote what.
Do LLM Evaluators Prefer Themselves for a Reason?
Computation and Language
Helps computers judge their own answers fairly.