Score: 1

Balanced Accuracy: The Right Metric for Evaluating LLM Judges -- Explained through Youden's J statistic

Published: December 8, 2025 | arXiv ID: 2512.08121v1

By: Stephane Collot, Colin Fraser, Justin Zhao, and more

BigTech Affiliations: Meta

Potential Business Impact:

Selects the best judge (classifier) for comparing AI models.

Business Areas:
A/B Testing, Data and Analytics

Rigorous evaluation of large language models (LLMs) relies on comparing models by the prevalence of desirable or undesirable behaviors, such as task pass rates or policy violations. These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of positive class, and can favor judges that distort prevalence estimates. We show that Youden's $J$ statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of $J$. Through both analytical arguments and empirical examples and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.
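A minimal sketch of the relationship the abstract describes, assuming the standard definitions $J = \mathrm{TPR} + \mathrm{TNR} - 1$ and Balanced Accuracy $= (\mathrm{TPR} + \mathrm{TNR})/2$, which gives $\mathrm{BA} = (J + 1)/2$. The example judge below (a hypothetical one chosen for illustration, not from the paper) shows how plain accuracy rewards a degenerate classifier under class imbalance while $J$ and Balanced Accuracy do not:

```python
import numpy as np

def judge_metrics(y_true, y_pred):
    """Compute Youden's J and Balanced Accuracy for a binary judge
    (1 = positive, 0 = negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    j = sensitivity + specificity - 1               # Youden's J
    balanced_acc = (sensitivity + specificity) / 2  # = (J + 1) / 2
    return j, balanced_acc

# Imbalanced ground truth: 90 negatives, 10 positives.
y_true = [0] * 90 + [1] * 10
# A judge that always predicts "negative" scores 90% plain accuracy
# yet is useless for estimating prevalence: J = 0, Balanced Accuracy = 0.5.
y_pred = [0] * 100
print(judge_metrics(y_true, y_pred))  # (0.0, 0.5) -- chance level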

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)