Grading Scale Impact on LLM-as-a-Judge: Human-LLM Alignment Is Highest on 0-5 Grading Scale
By: Weiyue Li, Minda Zhao, Weixuan Dong, and more
Potential Business Impact:
Makes AI judges more fair and consistent.
Large language models (LLMs) are increasingly used as automated evaluators, yet prior work shows that these LLM judges often score inconsistently when the prompt is altered. The effect of the grading scale itself, however, remains underexplored. We study the LLM-as-a-judge problem by comparing two kinds of raters: humans and LLMs. We collect ratings from both groups on three grading scales and across six benchmarks spanning objective, open-ended subjective, and mixed tasks. Using intraclass correlation coefficients (ICC) to measure absolute agreement, we find that LLM judgments are not perfectly consistent across scales on subjective benchmarks, and that the choice of scale substantially shifts human-LLM agreement even when within-group panel reliability is high. Aggregated over tasks, the 0-5 grading scale yields the strongest human-LLM alignment. We further show that pooled reliability can mask benchmark heterogeneity, and we find systematic subgroup differences in alignment across gender groups, underscoring the importance of scale design and subgroup-level diagnostics as essential components of LLM-as-a-judge protocols.
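The abstract's central statistic, the intraclass correlation coefficient for absolute agreement, can be sketched directly. The Python below is a minimal illustration, not the authors' implementation: it computes ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form, from a subjects-by-raters score matrix. The example ratings on a 0-5 scale (two human raters plus one LLM judge) are hypothetical.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) matrix with one score per cell.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA mean squares.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)           # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)           # raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))                 # residual

    return (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))

# Hypothetical example: 8 items scored on a 0-5 scale by two human raters
# and one LLM judge (values are made up for illustration).
scores = np.array([
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 0, 1],
    [3, 3, 2],
    [5, 5, 5],
    [0, 1, 1],
    [3, 4, 3],
], dtype=float)

print(f"ICC(2,1) absolute agreement: {icc_2_1(scores):.3f}")

Absolute-agreement ICC is used rather than a plain correlation because it penalizes systematic offsets between raters (for example, an LLM judge that consistently scores one point higher than humans), which matters when the question is whether panels assign the same scores rather than merely the same ranking.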
Similar Papers
On Evaluating LLM Alignment by Evaluating LLMs as Judges
Computation and Language
Tests if AI follows your wishes without reading its answers.
Aligning Black-box Language Models with Human Judgments
Computation and Language
Makes AI judges agree with people better.
Are We on the Right Way to Assessing LLM-as-a-Judge?
Computation and Language
Checks if AI judges are fair and honest.