Score: 1

How to Correctly Report LLM-as-a-Judge Evaluations

Published: November 26, 2025 | arXiv ID: 2511.21140v1

By: Chungpa Lee, Thomas Zeng, Jongwon Jeong, and more

Potential Business Impact:

Corrects the errors LLM judges make when grading model outputs, so reported accuracies are unbiased and benchmark comparisons are fairer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are increasingly used as evaluators in lieu of humans. While scalable, their judgments are noisy because LLM judges have imperfect sensitivity and specificity, which biases the resulting accuracy estimates. Bias-correction methods exist, but they are underutilized in LLM research and typically assume that the judge's sensitivity and specificity are known exactly. In practice, we only have estimates of these values, and it is not well understood how to construct valid confidence intervals from estimates alone. This work presents a simple plug-in framework that corrects the bias and constructs confidence intervals reflecting uncertainty from both the test and calibration datasets, enabling practical and statistically sound LLM-based evaluation. Additionally, to reduce uncertainty in the accuracy estimate, we introduce an adaptive algorithm that efficiently allocates calibration samples.
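
The plug-in correction described above can be illustrated with a standard Rogan-Gladen-style estimator. The Python sketch below is an assumption about the general shape of such a framework, not the paper's exact method: it corrects the raw judge-approved rate using estimated sensitivity and specificity, then attaches a delta-method confidence interval that propagates binomial noise from both the test set and the calibration set. All names are hypothetical.

    import math

    def corrected_accuracy_ci(p_hat, n_test, se_hat, n_se, sp_hat, n_sp, z=1.96):
        """Plug-in bias correction for an LLM-judged accuracy estimate.

        p_hat  : fraction of n_test test items the LLM judge marked "correct"
        se_hat : judge sensitivity estimated on n_se human-labeled correct items
        sp_hat : judge specificity estimated on n_sp human-labeled incorrect items
        Returns (corrected_accuracy, (ci_low, ci_high)).
        """
        denom = se_hat + sp_hat - 1.0
        if denom <= 0:
            raise ValueError("Judge must satisfy sensitivity + specificity > 1.")

        # Rogan-Gladen-style correction of the raw judged-correct rate.
        a_hat = min(max((p_hat + sp_hat - 1.0) / denom, 0.0), 1.0)

        # Delta-method variance, propagating binomial noise from BOTH the
        # test set (p_hat) and the calibration set (se_hat, sp_hat).
        var_p = p_hat * (1.0 - p_hat) / n_test
        var_se = se_hat * (1.0 - se_hat) / n_se
        var_sp = sp_hat * (1.0 - sp_hat) / n_sp
        var_a = (var_p + a_hat**2 * var_se + (1.0 - a_hat)**2 * var_sp) / denom**2

        half = z * math.sqrt(var_a)
        return a_hat, (max(a_hat - half, 0.0), min(a_hat + half, 1.0))

    # Example: the judge approves 80% of 1,000 outputs; a calibration set of
    # 200 correct and 200 incorrect outputs gives sensitivity 0.90 and
    # specificity 0.85.
    acc, (lo, hi) = corrected_accuracy_ci(0.80, 1000, 0.90, 200, 0.85, 200)
    print(f"corrected accuracy = {acc:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")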

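The adaptive calibration-allocation step could be sketched as a greedy rule built on the same variance decomposition: label one more human-verified correct or incorrect output, whichever yields the larger one-sample drop in the variance of the corrected accuracy. This heuristic is an illustrative assumption, not the paper's algorithm.

    def next_calibration_label(a_hat, se_hat, n_se, sp_hat, n_sp):
        """Greedy choice: should the next human label verify a correct or an
        incorrect output? Picks the stratum whose one extra sample most
        reduces the delta-method variance of the corrected accuracy.
        Heuristic sketch only; not the paper's allocation algorithm.
        """
        term_se = a_hat**2 * se_hat * (1.0 - se_hat)
        term_sp = (1.0 - a_hat)**2 * sp_hat * (1.0 - sp_hat)
        # Variance reduction from growing n_se or n_sp by one sample.
        gain_se = term_se * (1.0 / n_se - 1.0 / (n_se + 1))
        gain_sp = term_sp * (1.0 / n_sp - 1.0 / (n_sp + 1))
        return "correct" if gain_se >= gain_sp else "incorrect"

In practice, such a rule would be interleaved with re-estimating se_hat and sp_hat as new calibration labels arrive.
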
Repos / Data Links

Page Count
18 pages

Category
Computer Science: Machine Learning (CS)