Balanced Accuracy: The Right Metric for Evaluating LLM Judges -- Explained through Youden's J statistic
By: Stephane Collot, Colin Fraser, Justin Zhao, and more
Potential Business Impact:
Chooses the best judge to compare AI models.
Rigorous evaluation of large language models (LLMs) relies on comparing models by the prevalence of desirable or undesirable behaviors, such as task pass rates or policy violations. These prevalence estimates are produced by a classifier, either an LLM-as-a-judge or human annotators, making the choice of classifier central to trustworthy evaluation. Common metrics used for this choice, such as Accuracy, Precision, and F1, are sensitive to class imbalance and to arbitrary choices of positive class, and can favor judges that distort prevalence estimates. We show that Youden's $J$ statistic is theoretically aligned with choosing the best judge to compare models, and that Balanced Accuracy is an equivalent linear transformation of $J$. Through analytical arguments, empirical examples, and simulations, we demonstrate how selecting judges using Balanced Accuracy leads to better, more robust classifier selection.
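The abstract's central claim, that Balanced Accuracy is a linear transformation of Youden's $J$, can be checked directly: with sensitivity (TPR) and specificity (TNR), $J = \mathrm{TPR} + \mathrm{TNR} - 1$ and Balanced Accuracy $= (\mathrm{TPR} + \mathrm{TNR})/2$, so Balanced Accuracy $= (J+1)/2$. The sketch below is not from the paper; the simulated judge, its error rates, and the 5% prevalence are illustrative assumptions used only to verify the relationship on an imbalanced evaluation set.

```python
# Minimal sketch (illustrative, not the paper's code): compute Youden's J and
# Balanced Accuracy for a simulated LLM judge and confirm BA = (J + 1) / 2.

import numpy as np


def confusion_rates(y_true, y_pred):
    """Return (TPR, TNR) from binary ground-truth and judge labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tpr = np.mean(y_pred[y_true])     # sensitivity: P(judge = 1 | truth = 1)
    tnr = np.mean(~y_pred[~y_true])   # specificity: P(judge = 0 | truth = 0)
    return tpr, tnr


def youdens_j(y_true, y_pred):
    tpr, tnr = confusion_rates(y_true, y_pred)
    return tpr + tnr - 1


def balanced_accuracy(y_true, y_pred):
    tpr, tnr = confusion_rates(y_true, y_pred)
    return (tpr + tnr) / 2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000
    # Imbalanced ground truth (assumed 5% positive rate, e.g. policy violations).
    truth = rng.random(n) < 0.05
    # Imperfect judge with assumed 80% sensitivity and 90% specificity.
    judge = np.where(truth, rng.random(n) < 0.80, rng.random(n) < 0.90)

    j = youdens_j(truth, judge)
    ba = balanced_accuracy(truth, judge)
    print(f"Youden's J        = {j:.3f}")
    print(f"Balanced Accuracy = {ba:.3f}")
    print(f"(J + 1) / 2       = {(j + 1) / 2:.3f}  # equals Balanced Accuracy")
```

Because the two metrics differ only by this affine rescaling, ranking candidate judges by Balanced Accuracy selects the same judge as ranking them by $J$, while remaining insensitive to class imbalance and to which class is labeled positive.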
Similar Papers
How to Correctly Report LLM-as-a-Judge Evaluations
Machine Learning (CS)
Fixes computer judge mistakes for fairer tests.
Overconfidence in LLM-as-a-Judge: Diagnosis and Confidence-Driven Solution
Artificial Intelligence
Makes AI judges more honest about what they know.