Automated Essay Scoring Incorporating Annotations from Automated Feedback Systems
By: Christopher Ormerod
Potential Business Impact:
Helps computers grade essays better by marking mistakes and argument parts.
This study illustrates how incorporating feedback-oriented annotations into the scoring pipeline can improve the accuracy of automated essay scoring (AES). The approach is demonstrated on the Persuasive Essays for Rating, Selecting, and Understanding Argumentative and Discourse Elements (PERSUADE) corpus. We integrate two types of feedback-driven annotations: those that identify spelling and grammatical errors, and those that highlight argumentative components. To illustrate how this method could be applied in real-world scenarios, we employ two LLMs to generate the annotations: a generative language model used for spelling correction and an encoder-based token classifier trained to identify and mark argumentative elements. By incorporating these annotations into the scoring process, we demonstrate performance improvements with encoder-based large language models fine-tuned as classifiers.
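The pipeline described above hinges on merging annotation spans into the essay text before the scoring model sees it. The Python sketch below shows one plausible way to inline spans as XML-style tags ahead of fine-tuning an encoder classifier; the Span structure, the tag names, and the example spans are illustrative assumptions, not the paper's exact annotation scheme.

```python
# Minimal sketch (assumed scheme, not the paper's exact one): wrap annotated
# spans in XML-style tags so a fine-tuned encoder classifier can see both the
# essay text and the feedback annotations in a single input sequence.
from dataclasses import dataclass


@dataclass
class Span:
    start: int   # character offset where the annotated span begins
    end: int     # character offset where it ends (exclusive)
    label: str   # hypothetical labels, e.g. "spelling", "claim", "evidence"


def annotate(essay: str, spans: list[Span]) -> str:
    """Return the essay with each span wrapped in <label>...</label> tags."""
    out = essay
    # Apply spans right-to-left so earlier character offsets stay valid
    # as tags are inserted.
    for s in sorted(spans, key=lambda s: s.start, reverse=True):
        out = out[:s.start] + f"<{s.label}>" + out[s.start:s.end] \
              + f"</{s.label}>" + out[s.end:]
    return out


if __name__ == "__main__":
    essay = "Recieve better grades by studying. Homework helps students learn."
    spans = [
        Span(0, 7, "spelling"),   # misspelled "Recieve" flagged as an error
        Span(35, 65, "claim"),    # sentence marked as an argumentative element
    ]
    print(annotate(essay, spans))
    # The tagged text would then be tokenized and scored by an encoder
    # model fine-tuned as a sequence classifier.
```

Inserting tags from the rightmost span backwards is a small design choice that avoids recomputing character offsets after each insertion; any equivalent span-merging strategy would serve the same purpose.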
Similar Papers
Enhancing Arabic Automated Essay Scoring with Synthetic Data and Error Injection
Computation and Language
Teaches computers to grade Arabic essays better.
Improve LLM-based Automatic Essay Scoring with Linguistic Features
Computation and Language
Helps computers grade essays better and faster.
EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models
Computation and Language
Helps computers grade essays better, even with pictures.