Efficient Inference for Noisy LLM-as-a-Judge Evaluation
By: Yiqun T Chen, Sizhu Lu, Sijia Li, and more
Potential Business Impact:
Makes AI judges fairer and more accurate.
Large language models (LLMs) are increasingly used as automatic evaluators of generative AI outputs, a paradigm often referred to as "LLM-as-a-judge." In practice, LLM judges are imperfect proxies for the underlying truth and can exhibit systematic, non-random errors. Two main approaches have recently been proposed to address this issue: (i) direct measurement-error correction based on misclassification models such as Rogan-Gladen-style estimators, and (ii) surrogate-outcome approaches such as prediction-powered inference (PPI), which correct bias by calibrating prediction residuals on a small set of gold-standard human labels. In this paper, we systematically study the performance of these two approaches for estimating mean parameters (e.g., average benchmark scores or pairwise win rates). Leveraging tools from semiparametric efficiency theory, we unify the two classes of estimators by deriving explicit forms of efficient influence function (EIF)-based efficient estimators and characterize conditions under which PPI-style estimators attain strictly smaller asymptotic variance than measurement-error corrections. We verify our theoretical results in simulations and demonstrate the methods on real-data examples. We provide an implementation of the benchmarked methods and comparison utilities at https://github.com/yiqunchen/debias-llm-as-a-judge.
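As a minimal illustration of the two estimator families contrasted in the abstract (not the paper's EIF-based estimators, and independent of the linked repository), the sketch below simulates a binary LLM-judge evaluation with asymmetric error rates and compares a Rogan-Gladen-style misclassification correction against a PPI-style residual correction on a small gold-labeled set. All simulation parameters, the `judge` helper, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed simulation setup: y = human gold label, yhat = LLM-judge label.
# The judge has asymmetric sensitivity/specificity, so its naive mean is biased.
n_unlabeled, n_labeled = 10_000, 500
p_true, sens, spec = 0.6, 0.85, 0.75

def judge(y):
    # Flip gold labels according to the assumed sensitivity/specificity.
    flip = np.where(y == 1, rng.random(y.size) > sens, rng.random(y.size) < 1 - spec)
    return np.where(flip, 1 - y, y)

y_unl = rng.binomial(1, p_true, n_unlabeled)      # large unlabeled pool (gold hidden)
y_lab = rng.binomial(1, p_true, n_labeled)        # small gold-labeled calibration set
yhat_unl, yhat_lab = judge(y_unl), judge(y_lab)

# (i) Rogan-Gladen-style correction: estimate sensitivity/specificity on the
# labeled set, then invert the misclassification model on the judge's mean.
sens_hat = yhat_lab[y_lab == 1].mean()
spec_hat = 1 - yhat_lab[y_lab == 0].mean()
rg_estimate = (yhat_unl.mean() + spec_hat - 1) / (sens_hat + spec_hat - 1)

# (ii) PPI-style correction: take the judge's mean on the unlabeled pool and
# debias it with the labeled-set residuals (human label minus judge label).
ppi_estimate = yhat_unl.mean() + (y_lab - yhat_lab).mean()

print(f"naive judge mean: {yhat_unl.mean():.3f}")
print(f"Rogan-Gladen:     {rg_estimate:.3f}")
print(f"PPI-style:        {ppi_estimate:.3f}")
print(f"true mean:        {p_true:.3f}")
```

Both corrected estimates recover the true mean up to sampling noise, while the naive judge mean stays biased; the paper's contribution concerns which correction has the smaller asymptotic variance and how both relate to the efficient (EIF-based) estimator.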
Similar Papers
How to Correctly Report LLM-as-a-Judge Evaluations
Machine Learning (CS)
Fixes computer judge mistakes for fairer tests.
Evaluating and Mitigating LLM-as-a-judge Bias in Communication Systems
Artificial Intelligence
Makes AI judges fairer and more trustworthy.
Demystifying LLM-as-a-Judge: Analytically Tractable Model for Inference-Time Scaling
Machine Learning (CS)
Makes AI better by trying more answers.