Score: 1

Evaluating Vision-Language and Large Language Models for Automated Student Assessment in Indonesian Classrooms

Published: June 5, 2025 | arXiv ID: 2506.04822v1

By: Nurul Aisyah, Muhammad Dehan Al Kautsar, Arif Hidayat, and more

Potential Business Impact:

Helps grade student tests and give feedback.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although vision-language models and large language models (VLMs and LLMs) offer promising opportunities for AI-driven educational assessment, their effectiveness in real-world classroom settings, particularly in underrepresented educational contexts, remains underexplored. In this study, we evaluated the performance of a state-of-the-art VLM and several LLMs on 646 handwritten exam responses from grade 4 students in six Indonesian schools, covering two subjects: Mathematics and English. These answer sheets contain more than 14K student answers spanning multiple-choice, short-answer, and essay questions. The assessment tasks include grading these responses and generating personalized feedback. Our findings show that the VLM often struggles to accurately recognize student handwriting, leading to error propagation in downstream LLM grading. Nevertheless, LLM-generated feedback retains some utility even when derived from imperfect input, although limitations in personalization and contextual relevance persist.
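The pipeline the abstract describes has two stages: a VLM first transcribes the handwritten answer, then an LLM grades the transcription and drafts feedback, so any recognition error propagates straight into the grade. A minimal Python sketch of that shape, using hypothetical transcribe_answer_sheet and grade_response helpers as stand-ins (the paper does not publish an API), might look like:

    from dataclasses import dataclass

    @dataclass
    class GradedAnswer:
        transcription: str   # text the VLM recovered from the handwritten sheet
        score: float         # grade assigned by the LLM
        feedback: str        # personalized feedback generated by the LLM

    def transcribe_answer_sheet(image_path: str) -> str:
        # Hypothetical VLM call: recognize handwritten student answers in an image.
        # Placeholder return value so the pipeline shape stays runnable.
        return f"<handwritten text recognized from {image_path}>"

    def grade_response(question: str, rubric: str, transcription: str) -> GradedAnswer:
        # Hypothetical LLM call: grade the transcribed answer and draft feedback.
        # Because grading consumes the VLM transcription, any recognition error
        # propagates into the score and feedback, the failure mode noted above.
        prompt = (
            f"Question: {question}\n"
            f"Rubric: {rubric}\n"
            f"Student answer (VLM transcription): {transcription}\n"
            "Grade the answer and write short, personalized feedback."
        )
        _ = prompt  # a real system would send this prompt to an LLM
        return GradedAnswer(transcription=transcription, score=0.0,
                            feedback="(LLM feedback here)")

    if __name__ == "__main__":
        result = grade_response(
            question="What is 24 x 3?",
            rubric="Full marks for 72; partial credit for correct method.",
            transcription=transcribe_answer_sheet("student_sheet_001.jpg"),
        )
        print(result.score, result.feedback)

The helpers and file names are illustrative only; the study's actual models, prompts, and grading rubrics are described in the paper itself.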

Page Count
9 pages

Category
Computer Science: Computation and Language