LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
By: Grace Byun, Swati Rajwal, Jinho D. Choi
Potential Business Impact:
A computer grades student work like a teacher would.
Large Language Models (LLMs) are increasingly explored for educational tasks such as grading, yet their alignment with human evaluation in real classrooms remains underexamined. In this study, we investigate the feasibility of using an LLM (GPT-4o) to evaluate short-answer quizzes and project reports in an undergraduate Computational Linguistics course. We collect responses from approximately 50 students across five quizzes, along with project reports from 14 teams. LLM-generated scores are compared against human evaluations conducted independently by the course teaching assistants (TAs). Our results show that GPT-4o achieves strong correlation with human graders (up to 0.98) and exact score agreement in 55% of quiz cases. For project reports, it also shows strong overall alignment with human grading, while exhibiting some variability when scoring technical, open-ended responses. We release all code and sample data to support further research on LLMs in educational assessment. This work highlights both the potential and limitations of LLM-based grading systems and contributes to advancing automated grading in real-world academic settings.
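As a rough illustration of the quiz-level comparison described above, the short Python sketch below computes the two headline metrics, Pearson correlation and exact score agreement, from paired grades. The score arrays, variable names, and values are illustrative assumptions for this listing, not the authors' released code or data.

import numpy as np

# Illustrative (made-up) scores: one entry per quiz response.
ta_scores = np.array([10, 8, 9, 7, 10, 6, 9, 8])   # human grades from the TAs
llm_scores = np.array([10, 8, 8, 7, 10, 7, 9, 8])  # grades assigned by GPT-4o

# Pearson correlation between the two graders.
r = np.corrcoef(ta_scores, llm_scores)[0, 1]

# Share of responses where the LLM score exactly matches the TA score.
exact = np.mean(ta_scores == llm_scores)

print(f"Pearson r: {r:.2f}")            # the paper reports up to 0.98
print(f"Exact agreement: {exact:.0%}")  # the paper reports 55% on quizzes

In the study itself, the comparison is run per quiz and per report rather than on one pooled array; this sketch only shows how the aggregate numbers are derived from paired scores.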
Similar Papers
Assessing the Reliability and Validity of Large Language Models for Automated Assessment of Student Essays in Higher Education
Computers and Society
AI can't reliably grade essays yet.
Enhancing Large Language Models for Automated Homework Assessment in Undergraduate Circuit Analysis
Computers and Society
Helps AI grade student homework much better.
Benchmarking Large Language Models for Personalized Guidance in AI-Enhanced Learning
Artificial Intelligence
Helps AI tutors give better, personalized learning help.