Breaking the Mirror: Activation-Based Mitigation of Self-Preference in LLM Evaluators

Published: September 3, 2025 | arXiv ID: 2509.03647v1

By: Dani Roytburg, Matthew Bozoukov, Matthew Nguyen, and more

Potential Business Impact:

Reduces the unfair tendency of AI judges to favor their own answers.

Business Areas:
A/B Testing, Data and Analytics

Large language models (LLMs) increasingly serve as automated evaluators, yet they suffer from "self-preference bias": a tendency to favor their own outputs over those of other models. This bias undermines fairness and reliability in evaluation pipelines, particularly for tasks like preference tuning and model routing. We investigate whether lightweight steering vectors can mitigate this problem at inference time without retraining. We introduce a curated dataset that separates justified from unjustified examples of self-preference, and we construct steering vectors using two methods: Contrastive Activation Addition (CAA) and an optimization-based approach. Our results show that steering vectors can reduce unjustified self-preference bias by up to 97%, substantially outperforming prompting and direct preference optimization baselines. Yet steering vectors are unstable on legitimate self-preference and unbiased agreement, implying that self-preference spans multiple or nonlinear directions. This underscores both their promise and their limits as safeguards for LLM-as-judge pipelines, and it motivates more robust interventions.
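The abstract doesn't spell out how the steering vectors are built, so below is a minimal sketch of the general CAA recipe it names: average a model's residual-stream activations on contrastive biased vs. unbiased examples, take the difference as the steering vector, and subtract a scaled copy of it during inference. The model name, layer index, steering scale, and prompts are illustrative assumptions, not the authors' actual dataset or setup.

```python
# Minimal sketch of Contrastive Activation Addition (CAA)-style steering
# with PyTorch/transformers. Model, layer, scale, and prompts are
# illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # which transformer block's output to steer (assumption)

def mean_activation(prompts):
    """Mean residual-stream activation at the output of block LAYER,
    taken over the last token of each prompt."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[LAYER + 1] is block LAYER's output: (1, seq, d_model)
        acts.append(out.hidden_states[LAYER + 1][0, -1])
    return torch.stack(acts).mean(dim=0)

# Hypothetical contrastive pairs: judgments exhibiting unjustified
# self-preference vs. unbiased judgments.
biased = ["Verdict: my own answer wins, whatever its quality."]
unbiased = ["Verdict: the other model's answer is simply better here."]

# CAA: the steering vector is the difference of mean activations.
steer = mean_activation(biased) - mean_activation(unbiased)

def hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden state;
    # subtracting the scaled vector pushes activations away from the
    # bias direction. The scale (4.0) is a tunable assumption.
    return (output[0] - 4.0 * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(hook)
ids = tok("Which answer is better, A or B?", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```

In practice, the extraction layer and steering scale are typically chosen by sweeping over candidates and validating on held-out judgments, which is also where the instability the authors report on legitimate self-preference would surface.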

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science: Computation and Language