Rating Roulette: Self-Inconsistency in LLM-As-A-Judge Frameworks
By: Rajarshi Haldar, Julia Hockenmaier
Potential Business Impact:
Makes AI judges for writing more trustworthy.
As Natural Language Generation (NLG) continues to be widely adopted, properly assessing it has become quite difficult. Lately, using large language models (LLMs) to evaluate these generations has gained traction, as they tend to align more closely with human preferences than conventional n-gram or embedding-based metrics. In our experiments, we show that LLM judges have low intra-rater reliability in their assigned scores across different runs. This variance makes their ratings inconsistent, almost arbitrary in the worst case, and makes it difficult to measure how good their judgments actually are. We quantify this inconsistency across different NLG tasks and benchmarks and examine whether, with proper guidelines, judicious use of LLM judges can still be viable.
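The measurement at issue is intra-rater reliability: re-score the same outputs over several independent judge runs and check how much the scores move. The sketch below is illustrative only, not the paper's protocol or metrics; `score_with_llm_judge` is a hypothetical placeholder for whatever judge prompt or API call is actually used, and the reported statistics (per-item score spread and exact-agreement rate) are just simple proxies for run-to-run consistency.

```python
# Minimal sketch (assumed setup, not the paper's method): estimate how
# consistently an LLM judge scores the *same* outputs across repeated runs.
from statistics import mean, pstdev

def score_with_llm_judge(text: str, run_id: int) -> int:
    """Hypothetical judge call: return a 1-5 quality score for `text`.
    Replace with a real LLM API call; `run_id` marks the repeated run."""
    raise NotImplementedError

def intra_rater_report(outputs: list[str], n_runs: int = 5) -> dict:
    # scores[i][r] = score the judge assigned to output i on run r
    scores = [[score_with_llm_judge(o, r) for r in range(n_runs)] for o in outputs]
    per_item_spread = [pstdev(runs) for runs in scores]                 # variability per item
    exact_agreement = mean(len(set(runs)) == 1 for runs in scores)      # all runs agreed exactly
    return {
        "mean_score_std": mean(per_item_spread),   # 0.0 would mean perfectly consistent scoring
        "exact_agreement_rate": exact_agreement,   # fraction of items scored identically every run
    }
```

A judge with high intra-rater reliability would show a mean score standard deviation near zero and an exact-agreement rate near one; large values of the former (or low values of the latter) are the kind of run-to-run inconsistency the abstract describes.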
Similar Papers
Neither Valid nor Reliable? Investigating the Use of LLMs as Judges
Computation and Language
Makes AI judges for writing less trustworthy.
TrustJudge: Inconsistencies of LLM-as-a-Judge and How to Alleviate Them
Artificial Intelligence
Makes AI judges more fair and accurate.