Exploring the Utilities of the Rationales from Large Language Models to Enhance Automated Essay Scoring
By: Hong Jiao, Hanna Choi, Haowei Hua
Potential Business Impact:
Helps computers grade essays more accurately.
This study explored the utility of rationales generated by GPT-4.1 and GPT-5 for automated scoring, using Prompt 6 essays from the 2012 Kaggle ASAP dataset. Essay-based scoring was compared with rationale-based scoring. In general, essay-based scoring performed better, with higher Quadratic Weighted Kappa (QWK). However, rationale-based scoring achieved higher F1 scores for score 0, which was underrepresented due to class imbalance. Ensembling the essay-based scoring models increased accuracy both at specific score levels and across all score levels. Ensembles of essay-based scoring with each of the rationale-based scoring models performed about the same. A further ensemble of essay-based scoring with both rationale-based scoring models yielded the best accuracy, with a QWK of 0.870 compared with the 0.848 reported in the literature.
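Since the abstract's comparisons rest on QWK, per-class F1, and ensembling, a minimal Python sketch of that evaluation pipeline may help. The predictions, the assumed 0-4 score range, and the averaging-based ensemble below are illustrative assumptions, not the authors' actual models, data, or ensembling method.

```python
"""Minimal sketch: scoring evaluation with QWK, per-class F1, and a
simple ensemble. All predictions here are hypothetical placeholders."""
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical human scores and model predictions for held-out essays,
# assuming an integer score range of 0-4.
human = np.array([3, 2, 4, 0, 3, 1, 2, 4, 0, 3])
essay_model = np.array([3, 2, 3, 1, 3, 1, 2, 4, 0, 2])      # essay-based scoring
rationale_model = np.array([3, 1, 4, 0, 3, 2, 2, 3, 0, 3])  # rationale-based scoring

def report(name, pred):
    # Quadratic Weighted Kappa: agreement with human raters, penalizing
    # larger score disagreements more heavily than near-misses.
    qwk = cohen_kappa_score(human, pred, weights="quadratic")
    # Per-class F1 exposes performance on minority classes such as
    # score 0, which overall QWK can mask.
    f1_per_class = f1_score(human, pred, average=None, labels=list(range(5)))
    print(f"{name}: QWK={qwk:.3f}, F1 per score={np.round(f1_per_class, 2)}")

report("essay-based", essay_model)
report("rationale-based", rationale_model)

# One simple ensembling strategy for ordinal scores (an assumption, not
# necessarily the paper's method): average the predicted scores and
# round to the nearest valid score level.
ensemble = np.clip(np.rint((essay_model + rationale_model) / 2), 0, 4).astype(int)
report("ensemble", ensemble)
```

Averaging-then-rounding is just one common choice for combining ordinal predictions; the paper's ensemble could equally be built on predicted score probabilities or a learned combiner.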
Similar Papers
Automated Refinement of Essay Scoring Rubrics for Language Models via Reflect-and-Revise
Computation and Language
Teaches computers to grade essays like humans.
Exploration of Summarization by Generative Language Models for Automated Scoring of Long Essays
Computation and Language
Scores long essays better by summarizing them.
Rank-Then-Score: Enhancing Large Language Models for Automated Essay Scoring
Computation and Language
Helps computers grade essays better, especially in Chinese.