How to Correctly Report LLM-as-a-Judge Evaluations
By: Chungpa Lee, Thomas Zeng, Jongwon Jeong, and more
Potential Business Impact:
Corrects the errors made by AI judges so that evaluation results are fairer and more trustworthy.
Large language models (LLMs) are increasingly used as evaluators in place of humans. While scalable, their judgments are noisy because LLM judges have imperfect specificity and sensitivity, which biases the resulting accuracy estimates. Bias-correction methods exist but are underutilized in LLM research, and they typically assume that the judge's specificity and sensitivity are known exactly. In practice, only estimates of these values are available, and it is not well understood how to construct valid confidence intervals from estimates alone. This work presents a simple plug-in framework that corrects this bias and constructs confidence intervals reflecting uncertainty from both the test and calibration datasets, enabling practical and statistically sound LLM-based evaluation. Additionally, to reduce uncertainty in the accuracy estimate, we introduce an adaptive algorithm that efficiently allocates calibration sample sizes.
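To make the idea concrete, the sketch below shows one standard way to plug estimated sensitivity and specificity into a bias-corrected accuracy estimate (a Rogan-Gladen style correction) and to build a confidence interval via the delta method, so that uncertainty from both the test set and the calibration set enters the interval. This is an illustrative assumption about how such a plug-in correction can be implemented, not the paper's exact procedure; all function and variable names (e.g. `corrected_accuracy`, `judge_pass_rate`, `n_calib_pos`) are hypothetical.

```python
from dataclasses import dataclass
from math import sqrt
from scipy.stats import norm


@dataclass
class CorrectedAccuracy:
    estimate: float
    ci_lower: float
    ci_upper: float


def corrected_accuracy(
    judge_pass_rate: float,   # fraction of test items the LLM judge marked correct
    n_test: int,              # number of test items scored by the judge
    sensitivity: float,       # P(judge says correct | truly correct), from calibration data
    n_calib_pos: int,         # calibration items that are truly correct
    specificity: float,       # P(judge says incorrect | truly incorrect), from calibration data
    n_calib_neg: int,         # calibration items that are truly incorrect
    alpha: float = 0.05,
) -> CorrectedAccuracy:
    """Plug-in (Rogan-Gladen style) bias correction with a delta-method CI that
    propagates uncertainty from both the test and calibration datasets.
    Illustrative sketch; the paper's exact construction may differ."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("Judge must be better than chance: sensitivity + specificity > 1.")

    # Bias-corrected accuracy estimate, clipped to [0, 1]
    acc = (judge_pass_rate + specificity - 1.0) / denom
    acc = min(max(acc, 0.0), 1.0)

    # Delta-method variance: one test-set term plus two calibration-set terms
    var_p = judge_pass_rate * (1.0 - judge_pass_rate) / n_test
    var_se = sensitivity * (1.0 - sensitivity) / n_calib_pos
    var_sp = specificity * (1.0 - specificity) / n_calib_neg
    var_acc = (var_p + acc ** 2 * var_se + (1.0 - acc) ** 2 * var_sp) / denom ** 2

    z = norm.ppf(1.0 - alpha / 2.0)
    half_width = z * sqrt(var_acc)
    return CorrectedAccuracy(
        estimate=acc,
        ci_lower=max(acc - half_width, 0.0),
        ci_upper=min(acc + half_width, 1.0),
    )


# Hypothetical usage: the judge passes 72% of 1,000 test answers; a calibration
# set yields sensitivity 0.90 (200 true positives) and specificity 0.85 (200 true negatives).
result = corrected_accuracy(0.72, 1000, 0.90, 200, 0.85, 200)
print(f"corrected accuracy = {result.estimate:.3f} "
      f"[{result.ci_lower:.3f}, {result.ci_upper:.3f}]")
```

Because the calibration terms in the variance scale with 1/n_calib_pos and 1/n_calib_neg, weighted by the (unknown) accuracy, one can see why an adaptive allocation of calibration samples, as the abstract describes, can shrink the interval more efficiently than splitting the calibration budget evenly.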
Similar Papers
Overconfidence in LLM-as-a-Judge: Diagnosis and Confidence-Driven Solution
Artificial Intelligence
Makes AI judges more honest about what they know.
Evaluating and Mitigating LLM-as-a-judge Bias in Communication Systems
Artificial Intelligence
Makes AI judges fairer and more trustworthy.