Rating Roulette: Self-Inconsistency in LLM-As-A-Judge Frameworks

Published: October 31, 2025 | arXiv ID: 2510.27106v1

By: Rajarshi Haldar, Julia Hockenmaier

Potential Business Impact:

Quantifies how inconsistent LLM judges of generated text are across runs, helping teams decide when such evaluation can be trusted.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As Natural Language Generation (NLG) sees wider adoption, evaluating it properly has become difficult. Using large language models (LLMs) as evaluators has gained traction because they tend to align more closely with human preferences than conventional n-gram or embedding-based metrics. In our experiments, we show that LLM judges have low intra-rater reliability: the scores they assign to the same output vary across runs. This variance makes their ratings inconsistent, almost arbitrary in the worst case, and makes it difficult to measure how good their judgments actually are. We quantify this inconsistency across different NLG tasks and benchmarks, and examine whether judicious use of LLM judges, following proper guidelines, can still be useful.
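
The core measurement here is intra-rater reliability: score the same output several times under identical settings and see how much the ratings move. Below is a minimal Python sketch of that idea; judge_score is a hypothetical stand-in for a real LLM-judge call, and the paper's actual prompts, models, and reliability metrics are not reproduced here.

```python
import random
import statistics
from typing import Callable, List

def judge_score(text: str) -> float:
    """Hypothetical stand-in for an LLM-as-a-judge call.

    A real judge would prompt an LLM to rate `text` (e.g., on a 1-5 scale);
    here we just add noise around a fixed rating to mimic run-to-run variance.
    """
    return float(round(3 + random.gauss(0, 0.7)))

def intra_rater_spread(
    judge: Callable[[str], float],
    outputs: List[str],
    n_runs: int = 5,
) -> None:
    """Re-score each output n_runs times and summarize how much ratings move."""
    stdevs = []
    changed = 0
    for text in outputs:
        scores = [judge(text) for _ in range(n_runs)]
        stdevs.append(statistics.stdev(scores))
        if len(set(scores)) > 1:  # rating was not stable across runs
            changed += 1
    print(f"mean per-item score stdev: {statistics.mean(stdevs):.3f}")
    print(f"items whose rating changed across runs: {changed}/{len(outputs)}")

intra_rater_spread(judge_score, ["output A", "output B", "output C"])
```

A higher mean spread, or a larger share of items whose rating flips between runs, signals lower intra-rater reliability, which is the failure mode the paper reports.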

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Computation and Language