LCES: Zero-shot Automated Essay Scoring via Pairwise Comparisons Using Large Language Models
By: Takumi Shibata, Yuichi Miyamura
Potential Business Impact:
Helps computers grade essays more like humans.
Recent advances in large language models (LLMs) have enabled zero-shot automated essay scoring (AES), providing a promising way to reduce the cost and effort of essay scoring compared with manual grading. However, most existing zero-shot approaches rely on LLMs to directly generate absolute scores, which often diverge from human evaluations owing to model biases and inconsistent scoring. To address these limitations, we propose LLM-based Comparative Essay Scoring (LCES), a method that formulates AES as a pairwise comparison task. Specifically, we instruct LLMs to judge which of two essays is better, collect many such comparisons, and convert them into continuous scores. Considering that the number of possible comparisons grows quadratically with the number of essays, we improve scalability by employing RankNet to efficiently transform LLM preferences into scalar scores. Experiments using AES benchmark datasets show that LCES outperforms conventional zero-shot methods in accuracy while maintaining computational efficiency. Moreover, LCES is robust across different LLM backbones, highlighting its applicability to real-world zero-shot AES.
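The sketch below illustrates the core pipeline described in the abstract: sample a subset of essay pairs (so the number of LLM calls stays manageable rather than growing quadratically), ask an LLM judge which essay in each pair is better, and convert the preferences into scalar scores via the RankNet logistic loss. This is not the authors' implementation: `llm_judge` is a hypothetical placeholder for the actual LLM prompt, and the fitting step learns one latent score per essay directly (a Bradley-Terry style fit under the RankNet loss) rather than training a neural RankNet over essay representations as the paper does.

```python
# Minimal sketch of pairwise-comparison essay scoring (not the LCES implementation).
import itertools
import random
import numpy as np

def llm_judge(essay_a: str, essay_b: str) -> int:
    """Hypothetical stand-in for the LLM call: return 1 if essay_a is judged better, else 0."""
    # In practice this would prompt an LLM with both essays and parse its verdict.
    return int(len(essay_a) > len(essay_b))  # placeholder heuristic

def sample_pairs(n_essays: int, n_pairs: int, seed: int = 0):
    """Sample a subset of pairs so LLM cost does not grow quadratically with n_essays."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(n_essays), 2))
    return rng.sample(all_pairs, min(n_pairs, len(all_pairs)))

def fit_scores(n_essays: int, comparisons, lr: float = 0.1, epochs: int = 200):
    """Fit one latent score per essay by minimizing the RankNet logistic loss:
    L = -sum_{(i,j)} [ y * log sigma(s_i - s_j) + (1 - y) * log sigma(s_j - s_i) ]."""
    scores = np.zeros(n_essays)
    for _ in range(epochs):
        grad = np.zeros(n_essays)
        for (i, j), y in comparisons:
            p = 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))  # P(essay i beats essay j)
            g = p - y              # gradient of the pair loss w.r.t. (s_i - s_j)
            grad[i] += g
            grad[j] -= g
        scores -= lr * grad / len(comparisons)
    return scores

essays = [
    "Short essay.",
    "A somewhat longer essay with more supporting detail and structure.",
    "A mid-length essay with a clear thesis.",
]
pairs = sample_pairs(len(essays), n_pairs=3)
comparisons = [((i, j), llm_judge(essays[i], essays[j])) for i, j in pairs]
print(fit_scores(len(essays), comparisons))  # continuous scores; higher = judged better
```

The resulting scores are on an arbitrary continuous scale; mapping them onto a rubric's score range (e.g., by rescaling to the benchmark's min and max) would be a separate step.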
Similar Papers
EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models
Computation and Language
Helps computers grade essays better, even with pictures.
Improve LLM-based Automatic Essay Scoring with Linguistic Features
Computation and Language
Helps computers grade essays better and faster.
Beyond the Score: Uncertainty-Calibrated LLMs for Automated Essay Assessment
Computation and Language
Helps computers grade essays with confidence.