Neither Valid nor Reliable? Investigating the Use of LLMs as Judges

Published: August 25, 2025 | arXiv ID: 2508.18076v2

By: Khaoula Chehbouni, Mohammed Haddou, Jackie Chi Kit Cheung, and more

Potential Business Impact:

Calls into question the trustworthiness of AI judges for evaluating writing.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating natural language generation (NLG) systems remains a core challenge of natural language processing (NLP), further complicated by the rise of large language models (LLMs) that aim to be general-purpose. Recently, large language models as judges (LLJs) have emerged as a promising alternative to traditional metrics, but their validity remains underexplored. This position paper argues that the current enthusiasm around LLJs may be premature, as their adoption has outpaced rigorous scrutiny of their reliability and validity as evaluators. Drawing on measurement theory from the social sciences, we identify and critically assess four core assumptions underlying the use of LLJs: their ability to act as proxies for human judgment, their capabilities as evaluators, their scalability, and their cost-effectiveness. We examine how each of these assumptions may be challenged by the inherent limitations of LLMs, LLJs, or current practices in NLG evaluation. To ground our analysis, we explore three applications of LLJs: text summarization, data annotation, and safety alignment. Finally, we highlight the need for more responsible practices in the evaluation of LLJs, to ensure that their growing role in the field supports, rather than undermines, progress in NLG.
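To make the abstract's two central measurement-theoretic notions concrete, here is a minimal sketch, not from the paper, of how one might audit an LLJ for validity (do its scores track human judgment?) and reliability (does it agree with itself across repeated runs?). The `judge` function, the sample outputs, and the human scores are all hypothetical placeholders; a real audit would replace `judge` with an actual LLM call.

```python
# A minimal, self-contained sketch of a validity/reliability audit for an
# LLM-as-judge (LLJ). Everything here is illustrative: `judge`, the outputs,
# and the human scores are hypothetical stand-ins.

import random

from scipy.stats import spearmanr


def judge(text: str, run_id: int) -> int:
    """Hypothetical LLM-as-judge call returning a 1-5 quality score.

    A real audit would prompt an LLM here; a seeded RNG stands in so the
    sketch runs without API access.
    """
    rng = random.Random(f"{text}|{run_id}")
    return rng.randint(1, 5)


# Hypothetical generated outputs with gold human judgments (1-5 scale).
outputs = ["summary A", "summary B", "summary C", "summary D", "summary E"]
human_scores = [4, 2, 5, 3, 1]

# Validity check: rank correlation between LLJ scores and human scores,
# i.e., how well the judge serves as a proxy for human judgment.
llj_scores = [judge(o, run_id=0) for o in outputs]
validity_rho, _ = spearmanr(llj_scores, human_scores)

# Reliability check: agreement between two independent LLJ runs on the
# same outputs (a stand-in for test-retest reliability).
rerun_scores = [judge(o, run_id=1) for o in outputs]
reliability_rho, _ = spearmanr(llj_scores, rerun_scores)

print(f"validity (LLJ vs. human): rho = {validity_rho:.2f}")
print(f"reliability (run 1 vs. run 2): rho = {reliability_rho:.2f}")
```

For categorical labels, such as safe/unsafe decisions in the safety-alignment setting the paper examines, an agreement statistic like Cohen's kappa would replace rank correlation.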

Page Count
22 pages

Category
Computer Science:
Computation and Language