Assessing GPT Performance in a Proof-Based University-Level Course Under Blind Grading
By: Ming Ding, Rasmus Kyng, Federico Solda, and more
Potential Business Impact:
Tests whether AI can pass a proof-based university algorithms exam.
As large language models (LLMs) advance, their role in higher education, particularly in free-response problem-solving, requires careful examination. This study assesses the performance of GPT-4o and o1-preview under realistic educational conditions in an undergraduate algorithms course. Anonymous GPT-generated solutions to take-home exams were graded by teaching assistants unaware of their origin. Our analysis examines both coarse-grained performance (scores) and fine-grained reasoning quality (error patterns). Results show that GPT-4o consistently struggles, failing to reach the passing threshold, while o1-preview performs significantly better, surpassing the passing score and even exceeding the student median in certain exercises. However, both models exhibit issues with unjustified claims and misleading arguments. These findings highlight the need for robust assessment strategies and AI-aware grading policies in education.
Similar Papers
Evaluating GPT- and Reasoning-based Large Language Models on Physics Olympiad Problems: Surpassing Human Performance and Implications for Educational Assessment
Physics Education
AI solves physics problems better than students.
Enhancing Large Language Models for Automated Homework Assessment in Undergraduate Circuit Analysis
Computers and Society
Helps AI grade student homework much better.
LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
Computation and Language
Computer grades student work like a teacher.