Agreement Between Large Language Models and Human Raters in Essay Scoring: A Research Synthesis

Published: December 16, 2025 | arXiv ID: 2512.14561v1

By: Hongli Li, Che Han Chen, Kevin Fan, and more

Potential Business Impact:

Shows how closely LLM-assigned essay scores agree with scores given by human raters.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite the growing promise of large language models (LLMs) in automatic essay scoring (AES), empirical findings regarding their reliability compared to human raters remain mixed. Following the PRISMA 2020 guidelines, we synthesized 65 published and unpublished studies from January 2022 to August 2025 that examined agreement between LLMs and human raters in AES. Across studies, reported LLM-human agreement was generally moderate to good, with agreement indices (e.g., Quadratic Weighted Kappa, Pearson correlation, and Spearman's rho) mostly ranging between 0.30 and 0.80. Substantial variability in agreement levels was observed across studies, reflecting differences in study-specific factors as well as the lack of standardized reporting practices. Implications and directions for future research are discussed.
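The agreement indices named in the abstract quantify how closely LLM scores track human scores. As a minimal sketch, the snippet below implements Quadratic Weighted Kappa (QWK) in pure Python for two raters scoring on an integer rubric; the score data are hypothetical and chosen only for illustration, not drawn from the synthesized studies.

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Cohen's kappa with quadratic weights between two raters' integer scores.

    QWK = 1 - sum(w_ij * O_ij) / sum(w_ij * E_ij), where O is the observed
    score matrix, E the matrix expected from the raters' marginal score
    distributions, and w_ij = (i - j)^2 / (K - 1)^2 for K score categories.
    """
    n_cats = max_score - min_score + 1
    n = len(rater_a)
    # Observed co-occurrence matrix of (rater_a score, rater_b score)
    observed = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_score][b - min_score] += 1
    # Marginal score histograms for the expected-by-chance matrix
    hist_a = Counter(a - min_score for a in rater_a)
    hist_b = Counter(b - min_score for b in rater_b)
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            weight = ((i - j) ** 2) / ((n_cats - 1) ** 2)
            expected = hist_a[i] * hist_b[j] / n
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

# Hypothetical human and LLM scores on a 1-6 rubric
human = [3, 4, 5, 2, 4, 3, 5, 4]
llm = [3, 4, 4, 2, 5, 3, 5, 3]
print(round(quadratic_weighted_kappa(human, llm, 1, 6), 3))  # → 0.806
```

QWK penalizes large disagreements quadratically, so an LLM that is off by two rubric points is punished four times as heavily as one off by a single point; this is why it is a common headline metric in AES alongside Pearson and Spearman correlations.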

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Computation and Language