Automated Refinement of Essay Scoring Rubrics for Language Models via Reflect-and-Revise

Published: October 10, 2025 | arXiv ID: 2510.09030v1

By: Keno Harada, Lui Yoshida, Takeshi Kojima, and more

Potential Business Impact:

Lets LLMs grade essays more consistently with human raters, reducing the cost of manual scoring.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The performance of Large Language Models (LLMs) is highly sensitive to the prompts they are given. Drawing inspiration from the field of prompt optimization, this study investigates the potential for enhancing Automated Essay Scoring (AES) by refining the scoring rubrics used by LLMs. Specifically, our approach prompts the model to iteratively refine the rubric by reflecting on its own scoring rationales and on observed discrepancies with human scores on sample essays. Experiments on the TOEFL11 and ASAP datasets using GPT-4.1, Gemini-2.5-Pro, and Qwen-3-Next-80B-A3B-Instruct show Quadratic Weighted Kappa (QWK) improvements of up to 0.19 and 0.47, respectively. Notably, even with a simple initial rubric, our approach achieves comparable or better QWK than using detailed human-authored rubrics. Our findings highlight the importance of iterative rubric refinement in LLM-based AES to enhance alignment with human evaluations.
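The reflect-and-revise loop described above lends itself to a compact sketch. The Python snippet below is a minimal, hypothetical illustration of the cycle, not the authors' implementation: the `llm` callable, the prompt wording, and the fixed round count are all assumptions. QWK is computed with scikit-learn's `cohen_kappa_score` using quadratic weights.

```python
# Minimal sketch of iterative rubric refinement for LLM-based essay scoring.
# Assumption: `llm(prompt) -> str` is a hypothetical chat-completion wrapper;
# prompt wording and the stopping rule are illustrative, not the paper's method.
from sklearn.metrics import cohen_kappa_score

def score_essays(rubric, essays, llm):
    """Score each essay under the current rubric; return scores and rationales."""
    scores, rationales = [], []
    for essay in essays:
        reply = llm(
            f"Rubric:\n{rubric}\n\nEssay:\n{essay}\n\n"
            "Give a 1-5 score and a brief rationale as 'score|rationale'."
        )
        score, rationale = reply.split("|", 1)
        scores.append(int(score.strip()))
        rationales.append(rationale.strip())
    return scores, rationales

def refine_rubric(rubric, essays, human_scores, llm, rounds=5):
    """Reflect-and-revise loop: keep the rubric revision with the best QWK."""
    best_rubric, best_qwk = rubric, -1.0
    for _ in range(rounds):
        model_scores, rationales = score_essays(rubric, essays, llm)
        qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
        if qwk > best_qwk:
            best_rubric, best_qwk = rubric, qwk
        # Reflection: show the model its own rationales on essays where it
        # disagreed with the human score, then ask for a revised rubric.
        diffs = "\n".join(
            f"essay {i}: model={m}, human={h}, rationale={r}"
            for i, (m, h, r) in enumerate(
                zip(model_scores, human_scores, rationales)
            )
            if m != h
        )
        rubric = llm(
            f"Current rubric:\n{rubric}\n\n"
            f"Disagreements with human scores:\n{diffs}\n\n"
            "Revise the rubric so future scores better match the human scores."
        )
    return best_rubric, best_qwk
```

Keeping the best-scoring rubric across rounds (rather than always the latest revision) guards against a reflection step that degrades alignment, though the paper may handle this differently.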

Country of Origin
🇯🇵 Japan

Page Count
9 pages

Category
Computer Science:
Computation and Language