Enhancing Large Language Models for Automated Homework Assessment in Undergraduate Circuit Analysis
By: Liangliang Chen, Huiru Xie, Zhihao Qin, and more
Potential Business Impact:
Helps AI grade student homework more accurately.
This research full paper presents an enhancement pipeline for large language models (LLMs) in assessing homework for an undergraduate circuit analysis course, aiming to improve LLMs' capacity to provide personalized support to electrical engineering students. Existing evaluations have demonstrated that GPT-4o possesses promising capabilities in assessing student homework in this domain. Building on these findings, we enhance GPT-4o's performance through multi-step prompting, contextual data augmentation, and the incorporation of targeted hints. These strategies effectively address common errors observed in GPT-4o's responses when using simple prompts, leading to a substantial improvement in assessment accuracy. Specifically, the correct response rate for GPT-4o increases from 74.71% to 97.70% after applying the enhanced prompting and augmented data on entry-level circuit analysis topics. This work lays a foundation for the effective integration of LLMs into circuit analysis instruction and, more broadly, into engineering education.
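To make the described enhancement pipeline concrete, here is a minimal sketch of how multi-step prompting with contextual data and targeted hints might be wired together using the OpenAI Python SDK. The prompt wording, the `hints` text, and the two-step structure are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal sketch of a multi-step grading pipeline, assuming the OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
# All prompt text below is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send one chat turn to GPT-4o and return the reply text."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

def assess_homework(problem, reference_solution, student_answer, hints):
    # Step 1: have the model work the problem itself, with contextual
    # data (the reference solution) and targeted hints in the prompt.
    messages = [
        {"role": "system",
         "content": "You are a grader for an undergraduate circuit analysis course."},
        {"role": "user",
         "content": f"Problem:\n{problem}\n\n"
                    f"Reference solution:\n{reference_solution}\n\n"
                    f"Hints to avoid common errors:\n{hints}\n\n"
                    "First, solve the problem step by step yourself."},
    ]
    worked = ask(messages)

    # Step 2: only after the model has committed to its own solution,
    # ask it to compare the student's answer against that solution.
    messages += [
        {"role": "assistant", "content": worked},
        {"role": "user",
         "content": f"Student answer:\n{student_answer}\n\n"
                    "Compare the student answer with your solution and "
                    "reply with CORRECT or INCORRECT plus a one-line reason."},
    ]
    return ask(messages)
```

In this sketch, splitting assessment into a solve step and a compare step reflects the multi-step prompting idea: the model produces its own worked solution before seeing the student's answer, rather than grading in a single pass.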
Similar Papers
Benchmarking Large Language Models on Homework Assessment in Circuit Analysis
Computers and Society
Helps computers grade student homework accurately.
LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
Computation and Language
A computer grades student work like a teacher.
Large Language Model-Driven Dynamic Assessment of Grammatical Accuracy in English Language Learner Writing
Computation and Language
Helps computers teach English grammar better.