Order in the Evaluation Court: A Critical Analysis of NLG Evaluation Trends
By: Jing Yang, Nils Feldhus, Salar Mohtaj, and more
Despite advances in Natural Language Generation (NLG), evaluation remains challenging. Although various new metrics and LLM-as-a-judge (LaaJ) methods have been proposed, human judgment persists as the gold standard. To systematically review how NLG evaluation has evolved, we employ an automatic information extraction scheme to gather key information from NLG papers, focusing on the different evaluation methods (metrics, LaaJ, and human evaluation). Using metadata extracted from 14,171 papers across four major conferences (ACL, EMNLP, NAACL, and INLG) over the past six years, we reveal several critical findings: (1) Task Divergence: While Dialogue Generation shows a rapid shift toward LaaJ (>40% in 2025), Machine Translation remains locked into n-gram metrics, and Question Answering exhibits a substantial decline in the proportion of studies conducting human evaluation. (2) Metric Inertia: Despite the development of semantic metrics, general-purpose metrics (e.g., BLEU, ROUGE) continue to be widely used across tasks without empirical justification, even though they often lack the discriminative power to distinguish between specific quality criteria. (3) Human-LaaJ Divergence: Our association analysis challenges the assumption that LLMs act as mere proxies for humans; LaaJ and human evaluations prioritize very different signals, and explicit validation is scarce (fewer than 8% of papers compare the two), with only moderate to low correlation. Based on these observations, we derive practical recommendations to improve the rigor of future NLG evaluation.
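The validation step the abstract reports as scarce amounts to correlating judge scores with human ratings collected for the same outputs. Below is a minimal sketch of such a check, not taken from the paper: the score lists are placeholder data, and the choice of Spearman and Kendall rank correlations is one common way to quantify the "moderate to low" agreement the authors describe.

```python
# Hedged sketch: correlate LLM-as-a-judge (LaaJ) scores with human ratings
# for the same generated outputs. All data and names here are illustrative,
# not taken from the paper's experiments.
from scipy.stats import spearmanr, kendalltau

# Hypothetical per-example quality scores (one entry per generated output).
laaj_scores  = [4.5, 3.0, 4.0, 2.5, 5.0, 3.5]
human_scores = [4.0, 2.0, 4.5, 3.0, 4.0, 3.0]

# Spearman measures monotonic association; Kendall measures pairwise rank agreement.
rho, rho_p = spearmanr(laaj_scores, human_scores)
tau, tau_p = kendalltau(laaj_scores, human_scores)

print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Kendall tau  = {tau:.2f} (p = {tau_p:.3f})")
```

Reporting such correlations alongside LaaJ results is one concrete way to follow the paper's call for more rigorous validation against human judgments.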
Similar Papers
The illusion of a perfect metric: Why evaluating AI's words is harder than it looks
Computation and Language
Explains why reliably scoring AI-generated text is harder than it looks.
Neither Valid nor Reliable? Investigating the Use of LLMs as Judges
Computation and Language
Examines how valid and reliable LLMs are as judges of generated text.