Enhancing Large Language Models for Automated Homework Assessment in Undergraduate Circuit Analysis

Published: November 22, 2025 | arXiv ID: 2511.18221v1

By: Liangliang Chen, Huiru Xie, Zhihao Qin, and more

Potential Business Impact:

Makes LLM-based grading of student homework substantially more accurate, supporting scalable, personalized feedback in engineering courses.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This research full paper presents an enhancement pipeline for large language models (LLMs) in assessing homework for an undergraduate circuit analysis course, aiming to improve LLMs' capacity to provide personalized support to electrical engineering students. Existing evaluations have demonstrated that GPT-4o possesses promising capabilities in assessing student homework in this domain. Building on these findings, we enhance GPT-4o's performance through multi-step prompting, contextual data augmentation, and the incorporation of targeted hints. These strategies effectively address common errors observed in GPT-4o's responses when using simple prompts, leading to a substantial improvement in assessment accuracy. Specifically, the correct response rate for GPT-4o increases from 74.71% to 97.70% after applying the enhanced prompting and augmented data on entry-level circuit analysis topics. This work lays a foundation for the effective integration of LLMs into circuit analysis instruction and, more broadly, into engineering education.
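The abstract describes the enhancement pipeline only at a high level (multi-step prompting, contextual data augmentation, targeted hints). The sketch below is one plausible way such a pipeline could be structured; it is not the authors' implementation. The `HomeworkItem` fields, the `call_llm` placeholder, and the two-step prompt flow are all assumptions for illustration.

```python
# Hypothetical sketch of a multi-step LLM assessment pipeline in the spirit of
# the paper: the problem, an instructor reference solution (contextual
# augmentation), and targeted hints are injected as context, and the model is
# queried in separate steps rather than with a single flat prompt.
# `call_llm` is a placeholder for whatever chat-completion client is used
# (e.g., GPT-4o); it is NOT the authors' code.

from dataclasses import dataclass


@dataclass
class HomeworkItem:
    problem: str              # circuit analysis problem statement
    reference_solution: str   # instructor's worked solution (augmented context)
    hints: list[str]          # targeted hints addressing known failure modes
    student_answer: str       # submission to be assessed


def call_llm(system: str, user: str) -> str:
    """Placeholder for a chat-completion call; wire to your own LLM client."""
    raise NotImplementedError


def assess(item: HomeworkItem) -> str:
    system = "You are a grader for an undergraduate circuit analysis course."

    # Step 1: have the model solve the problem itself, with hints as context,
    # so grading is anchored to an explicit worked solution.
    hint_block = "\n".join(f"- {h}" for h in item.hints)
    model_solution = call_llm(
        system,
        f"Problem:\n{item.problem}\n\nHints:\n{hint_block}\n\n"
        "Solve the problem step by step.",
    )

    # Step 2: compare the student's answer against both the model's own
    # solution and the instructor's reference solution, then grade.
    verdict = call_llm(
        system,
        f"Problem:\n{item.problem}\n\n"
        f"Reference solution:\n{item.reference_solution}\n\n"
        f"Your earlier solution:\n{model_solution}\n\n"
        f"Student answer:\n{item.student_answer}\n\n"
        "State whether the student's answer is correct and explain any errors.",
    )
    return verdict
```

Separating the "solve" and "grade" steps is one way to realize the multi-step prompting the abstract mentions; the reference solution and hints stand in for the contextual data augmentation and targeted hints it credits for the accuracy gain.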

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Computers and Society