Automated Refinement of Essay Scoring Rubrics for Language Models via Reflect-and-Revise
By: Keno Harada, Lui Yoshida, Takeshi Kojima, and more
Potential Business Impact:
Teaches computers to grade essays like humans.
The performance of Large Language Models (LLMs) is highly sensitive to the prompts they are given. Drawing inspiration from the field of prompt optimization, this study investigates the potential for enhancing Automated Essay Scoring (AES) by refining the scoring rubrics used by LLMs. Specifically, our approach prompts models to iteratively refine rubrics by reflecting on their own scoring rationales and on observed discrepancies with human scores on sample essays. Experiments on the TOEFL11 and ASAP datasets using GPT-4.1, Gemini-2.5-Pro, and Qwen-3-Next-80B-A3B-Instruct show Quadratic Weighted Kappa (QWK) improvements of up to 0.19 and 0.47, respectively. Notably, even with a simple initial rubric, our approach achieves comparable or better QWK than using detailed human-authored rubrics. Our findings highlight the importance of iterative rubric refinement in LLM-based AES to enhance alignment with human evaluations.
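The reflect-and-revise loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `llm` callable, prompt wording, 1-5 score scale, and helper names are assumptions; only the overall pattern (score with rationale, measure QWK against human scores, ask the model to revise the rubric, repeat) follows the abstract.

```python
# Minimal sketch of iterative rubric refinement via reflect-and-revise.
# Assumes a user-supplied `llm` callable (prompt text -> response text);
# prompts, parsing, and the 1-5 scale are illustrative, not the paper's.
import re
from typing import Callable

from sklearn.metrics import cohen_kappa_score


def score_with_rationale(llm: Callable[[str], str], rubric: str, essay: str) -> tuple[int, str]:
    """Ask the LLM for a score plus its scoring rationale under the current rubric."""
    reply = llm(
        f"Rubric:\n{rubric}\n\nEssay:\n{essay}\n\n"
        "Reply with 'Score: <1-5>' on the first line, then your rationale."
    )
    score = int(re.search(r"Score:\s*(\d)", reply).group(1))
    return score, reply


def reflect_and_revise(llm: Callable[[str], str], rubric: str, feedback: str) -> str:
    """Ask the LLM to rewrite the rubric given its rationales and the gaps to human scores."""
    return llm(
        f"Current rubric:\n{rubric}\n\n"
        f"Model scores, rationales, and human scores on sample essays:\n{feedback}\n\n"
        "Revise the rubric so that future model scores align better with the human scores."
    )


def refine_rubric(llm, rubric, essays, human_scores, rounds=5):
    """Iteratively refine the rubric, keeping the version with the best QWK."""
    best_rubric, best_qwk = rubric, float("-inf")
    for _ in range(rounds):
        scored = [score_with_rationale(llm, rubric, e) for e in essays]
        model_scores = [s for s, _ in scored]
        qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
        if qwk > best_qwk:
            best_rubric, best_qwk = rubric, qwk
        feedback = "\n\n".join(
            f"Essay: {e}\nModel score: {s} (human: {h})\nRationale: {r}"
            for e, (s, r), h in zip(essays, scored, human_scores)
        )
        rubric = reflect_and_revise(llm, rubric, feedback)
    return best_rubric, best_qwk
```

In this sketch, QWK on the sample essays serves both as the selection criterion for the best rubric and as the signal that, together with the rationales, drives the next revision; the actual prompting strategy and stopping criteria in the paper may differ.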
Similar Papers
Do We Need a Detailed Rubric for Automated Essay Scoring using Large Language Models?
Computation and Language
Makes AI grade essays better with fewer instructions.
Assessing the Reliability and Validity of Large Language Models for Automated Assessment of Student Essays in Higher Education
Computers and Society
AI can't reliably grade essays yet.
Agreement Between Large Language Models and Human Raters in Essay Scoring: A Research Synthesis
Computation and Language
Helps computers grade essays as well as people.