Mitigating Self-Preference by Authorship Obfuscation
By: Taslim Mahbub, Shi Feng
Potential Business Impact:
Makes AI judges fairer by hiding who wrote what.
Language model (LM) judges are widely used to evaluate the quality of LM outputs. Despite many advantages, LM judges display concerning biases that can impair the integrity of their evaluations. One such bias is self-preference: LM judges preferring their own answers over those produced by other LMs or humans. The bias is hard to eliminate because frontier LM judges can distinguish their own outputs from those of others, even when the evaluation candidates are not labeled with their sources. In this paper, we investigate strategies to mitigate self-preference by reducing LM judges' ability to recognize their own outputs. We apply black-box perturbations to evaluation candidates in pairwise comparisons to obfuscate authorship and reduce self-recognition. We find that perturbations as simple as synonym replacement for a few words predictably reduce self-preference. However, we also uncover fundamental challenges to eliminating the bias: when we extrapolate our perturbations to a more complete neutralization of stylistic differences between the evaluation candidates, self-preference recovers. Our findings suggest that self-recognition and self-preference can happen on many semantic levels, and complete mitigation remains challenging despite promising initial results.
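To make the perturbation idea concrete, the following is a minimal Python sketch of synonym-replacement obfuscation applied to both candidates before an unlabeled pairwise comparison. The synonym table, replacement budget, and prompt format are illustrative assumptions, not the authors' exact implementation.

```python
import random

# Illustrative sketch (not the paper's code): perturb a few words with synonyms
# so a judge model cannot rely on surface style to recognize its own output.

# Hypothetical synonym table; the paper's actual perturbation method may differ.
SYNONYMS = {
    "use": ["employ", "utilize"],
    "show": ["demonstrate", "indicate"],
    "important": ["significant", "notable"],
    "method": ["approach", "technique"],
    "result": ["outcome", "finding"],
}

def perturb(text: str, num_replacements: int = 3, seed: int = 0) -> str:
    """Replace up to `num_replacements` words with synonyms to mask stylistic cues."""
    rng = random.Random(seed)
    words = text.split()
    positions = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    rng.shuffle(positions)
    for i in positions[:num_replacements]:
        words[i] = rng.choice(SYNONYMS[words[i].lower()])
    return " ".join(words)

def pairwise_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Build an unlabeled pairwise-comparison prompt with both answers perturbed."""
    return (
        f"Question: {question}\n\n"
        f"Answer A: {perturb(answer_a, seed=1)}\n\n"
        f"Answer B: {perturb(answer_b, seed=2)}\n\n"
        "Which answer is better? Reply with 'A' or 'B'."
    )

if __name__ == "__main__":
    print(pairwise_judge_prompt(
        "Why is the sky blue?",
        "We use Rayleigh scattering to show why shorter wavelengths dominate.",
        "The important result is that blue light scatters more than red light.",
    ))
```

The perturbed prompt would then be sent to the judge model in place of the raw candidates; the abstract's caveat is that pushing such perturbations toward fully neutralizing stylistic differences can let self-preference re-emerge.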
Similar Papers
Do LLM Evaluators Prefer Themselves for a Reason?
Computation and Language
Helps computers judge their own answers fairly.
Breaking the Mirror: Activation-Based Mitigation of Self-Preference in LLM Evaluators
Computation and Language
Fixes AI judging its own answers unfairly.
Play Favorites: A Statistical Method to Measure Self-Bias in LLM-as-a-Judge
Computation and Language
Finds when AI unfairly favors its own answers.