Evaluating Vision-Language and Large Language Models for Automated Student Assessment in Indonesian Classrooms
By: Nurul Aisyah, Muhammad Dehan Al Kautsar, Arif Hidayat, and more
Potential Business Impact:
Helps grade student tests and give feedback.
Although vision-language models (VLMs) and large language models (LLMs) offer promising opportunities for AI-driven educational assessment, their effectiveness in real-world classroom settings, particularly in underrepresented educational contexts, remains underexplored. In this study, we evaluated the performance of a state-of-the-art VLM and several LLMs on 646 handwritten exam responses from grade 4 students in six Indonesian schools, covering two subjects: Mathematics and English. These answer sheets contain more than 14K student answers spanning multiple-choice, short-answer, and essay questions. Assessment tasks include grading these responses and generating personalized feedback. Our findings show that the VLM often struggles to accurately recognize student handwriting, leading to error propagation in downstream LLM grading. Nevertheless, LLM-generated feedback retains some utility even when derived from imperfect input, although limitations in personalization and contextual relevance persist.
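The abstract describes a two-stage pipeline: a VLM first transcribes the handwritten answer sheet, and an LLM then grades the transcript and drafts feedback, so any transcription error propagates downstream. The sketch below illustrates that pattern using the OpenAI Python SDK; the model names, prompts, rubric format, and file names are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of a two-stage grading pipeline, assuming the OpenAI
# Python SDK. Model names, prompts, and the rubric are stand-ins, not
# the paper's actual configuration.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def transcribe_sheet(image_path: str) -> str:
    """Stage 1: ask a VLM to read the handwritten answers verbatim."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in VLM; the paper's model may differ
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe every student answer on this exam "
                         "sheet exactly as written, numbered per question."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def grade_and_feedback(transcript: str, rubric: str) -> str:
    """Stage 2: grade the (possibly noisy) transcript against a rubric
    and draft brief feedback. Any misreadings from stage 1 propagate
    into the scores and feedback produced here."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in LLM grader
        messages=[
            {"role": "system",
             "content": "You are a grade-4 teacher. Grade each answer "
                        "against the rubric and write one sentence of "
                        "feedback per question."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nStudent answers:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    transcript = transcribe_sheet("sheet_001.png")  # hypothetical scan
    print(grade_and_feedback(transcript, "Q1: 2 + 2 = 4 (1 pt)"))
```

Separating the stages this way makes the study's error-propagation finding easy to probe: one can grade a ground-truth transcript and the VLM transcript side by side and attribute score differences to recognition errors alone.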
Similar Papers
Automated Grading of Students' Handwritten Graphs: A Comparison of Meta-Learning and Vision-Large Language Models
Machine Learning (CS)
Grades student math drawings automatically.
VLM@school -- Evaluation of AI image understanding on German middle school knowledge
Artificial Intelligence
Tests AI's smarts using school lessons.
Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence
CV and Pattern Recognition
AI helps doctors understand surgery better.