Developing and Evaluating a Large Language Model-Based Automated Feedback System Grounded in Evidence-Centered Design for Supporting Physics Problem Solving
By: Holger Maus, Paul Tschisgale, Fabian Kieser, and more
Potential Business Impact:
AI helps students learn physics, but it makes mistakes.
Generative AI offers new opportunities for individualized and adaptive learning, particularly through large language model (LLM)-based feedback systems. While LLMs can produce effective feedback for relatively straightforward conceptual tasks, delivering high-quality feedback for tasks that require advanced domain expertise, such as physics problem solving, remains a substantial challenge. This study presents the design of an LLM-based feedback system for physics problem solving grounded in evidence-centered design (ECD) and evaluates its performance within the German Physics Olympiad. Participants assessed the usefulness and accuracy of the generated feedback, which was generally perceived as useful and highly accurate. However, an in-depth analysis revealed that the feedback contained factual errors in 20% of cases, errors that often went unnoticed by the students. We discuss the risks associated with uncritical reliance on LLM-based feedback systems and outline potential directions for generating more adaptive and reliable LLM-based feedback in the future.
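Evidence-centered design structures an assessment around a task model (what the student is asked to do), an evidence model (observable indicators of competency), and rules linking observations to feedback. The paper does not publish its implementation, so the sketch below only illustrates that general structure, assuming an OpenAI-style chat client; the model name, task, rubric criteria, and prompt wording are all hypothetical, not the authors' system.

```python
# Minimal sketch of ECD-structured feedback generation.
# Assumptions (not from the paper): the openai Python client (>=1.0),
# the "gpt-4o" model name, and the illustrative task and rubric below.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class EvidenceRubric:
    """Evidence model: observable indicators of problem-solving competency."""
    criteria: list[str]

TASK = "A block slides down a frictionless incline at angle theta. Find its acceleration."
RUBRIC = EvidenceRubric(criteria=[
    "Identifies the relevant forces (gravity component along the incline, normal force)",
    "Applies Newton's second law along the incline",
    "Arrives at a = g*sin(theta) with consistent algebra and units",
])

def generate_feedback(client: OpenAI, solution: str) -> str:
    """Prompt the LLM to ground its feedback in the evidence criteria only."""
    criteria = "\n".join(f"- {c}" for c in RUBRIC.criteria)
    prompt = (
        f"Task:\n{TASK}\n\n"
        f"Evidence criteria (judge the solution only against these):\n{criteria}\n\n"
        f"Student solution:\n{solution}\n\n"
        "Give formative feedback: state which criteria are met and which are not, "
        "then give one concrete hint. Do not reveal the full solution."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage (requires OPENAI_API_KEY in the environment):
# client = OpenAI()
# print(generate_feedback(client, "F = ma along the incline gives a = g sin(theta)."))
```

Constraining the model to a fixed evidence rubric is one plausible way to operationalize ECD in a prompt, though, as the study's 20% error rate suggests, such constraints do not by themselves guarantee factual accuracy.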
Similar Papers
Personalized Auto-Grading and Feedback System for Constructive Geometry Tasks Using Large Language Models on an Online Math Platform
Computers and Society
Helps kids learn math by checking their drawings.
Personalized and Constructive Feedback for Computer Science Students Using the Large Language Model (LLM)
Computers and Society
Gives students personalized feedback to learn better.
Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback
Computers and Society
AI chatbot helps teachers get better student feedback.