Automated Grading of Students' Handwritten Graphs: A Comparison of Meta-Learning and Vision-Large Language Models
By: Behnam Parsaeifard, Martin Hlosta, Per Bergamin
Potential Business Impact:
Grades student math drawings automatically.
With the rise of online learning, the demand for efficient and consistent assessment in mathematics has increased significantly over the past decade. Machine Learning (ML), particularly Natural Language Processing (NLP), has been widely used to autograde student responses involving text and/or mathematical expressions. However, there has been limited research on autograding responses that involve students' handwritten graphs, despite their prevalence in Science, Technology, Engineering, and Mathematics (STEM) curricula. In this study, we implement multimodal meta-learning models for autograding images containing students' handwritten graphs and text. We further compare the performance of Vision Large Language Models (VLLMs) with these specially trained meta-learning models. Our results, evaluated on a real-world dataset collected at our institution, show that the best-performing meta-learning models outperform VLLMs on 2-way classification tasks. In contrast, on the more complex 3-way classification tasks, the best-performing VLLMs slightly outperform the meta-learning models. While VLLMs show promising results, their reliability and practical applicability remain uncertain and require further investigation.
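The abstract does not specify which meta-learning algorithm the authors use; a common choice for few-shot image classification of this kind is a prototypical-network-style classifier, where each grade class (e.g. "correct" vs. "incorrect") is represented by the mean of a few support embeddings and a query image is assigned to the nearest class prototype. The sketch below illustrates that idea on toy embedding vectors — the embedding values, class names, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def class_prototypes(support_embs, support_labels):
    """Compute one prototype (mean embedding) per class from a small support set."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        np.mean([e for e, y in zip(support_embs, support_labels) if y == c], axis=0)
        for c in classes
    ])
    return classes, protos

def classify_query(query_emb, classes, protos):
    """Assign the query to the class whose prototype is nearest in Euclidean distance."""
    dists = np.linalg.norm(protos - query_emb, axis=1)
    return classes[int(np.argmin(dists))]

# Toy 2-way example: embeddings of graded graph images (values are made up).
support_embs = [np.array([1.0, 1.0]), np.array([1.2, 0.9]),   # "correct" cluster
                np.array([-1.0, -1.1]), np.array([-0.9, -1.0])]  # "incorrect" cluster
support_labels = ["correct", "correct", "incorrect", "incorrect"]

classes, protos = class_prototypes(support_embs, support_labels)
print(classify_query(np.array([0.8, 1.1]), classes, protos))  # near the "correct" cluster
```

In a real pipeline, the embeddings would come from a multimodal encoder over the handwritten graph image (and any accompanying text), and the 3-way variant would simply add a third class (e.g. "partially correct") to the support set.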
Similar Papers
Evaluating Vision-Language and Large Language Models for Automated Student Assessment in Indonesian Classrooms
Computation and Language
Helps grade student tests and give feedback.
Seeing the Big Picture: Evaluating Multimodal LLMs' Ability to Interpret and Grade Handwritten Student Work
CV and Pattern Recognition
Helps computers grade math homework by looking.
Grading Handwritten Engineering Exams with Multimodal Large Language Models
CV and Pattern Recognition
Grades handwritten science tests automatically and accurately.