Personalized and Constructive Feedback for Computer Science Students Using the Large Language Model (LLM)

Published: October 13, 2025 | arXiv ID: 2510.11556v1

By: Javed Ali Khan, Muhammad Yaqoob, Mamoona Tasadduq and more

Potential Business Impact:

Gives students immediate, personalized feedback on their coursework so they can learn and improve.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evolving pedagogical paradigms are driving educational transformation. One fundamental aspect of effective learning is relevant, immediate, and constructive feedback to students. Providing constructive feedback to large cohorts in academia is an ongoing challenge, so academics are moving towards automated assessment to deliver immediate feedback. However, current approaches are often limited in scope, offering simplistic responses that do not give students the personalized feedback they need to improve. This paper addresses this limitation by investigating the performance of Large Language Models (LLMs) in processing students' assessments against predefined rubrics and marking criteria to generate personalized feedback for in-depth learning. We aim to leverage existing LLMs for Marking Assessments, Tracking, and Evaluation (LLM-MATE) with personalized feedback to enhance student learning. To evaluate the performance of LLM-MATE, we consider the Software Architecture (SA) module as a case study. The LLM-MATE approach can help module leaders overcome the challenges of assessing large cohorts, and it helps students improve their learning by receiving personalized feedback in a timely manner. Additionally, the proposed approach facilitates the establishment of ground truth for automating the generation of students' assessment feedback using the ChatGPT API, thereby reducing the overhead associated with large-cohort assessment.
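
The abstract describes feeding a student submission and a predefined rubric to the ChatGPT API to produce personalized feedback. The sketch below is only an illustration of that general idea under stated assumptions: it uses the official `openai` Python client, and the model name, rubric criteria, prompt wording, and the `generate_feedback` helper are hypothetical placeholders rather than the paper's actual LLM-MATE implementation.

```python
# Minimal sketch (not the authors' implementation) of rubric-based feedback
# generation via the ChatGPT API, assuming the official `openai` Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative rubric for a Software Architecture submission; not taken from the paper.
RUBRIC = """\
Criterion 1 (40%): Quality of the proposed software architecture and its justification.
Criterion 2 (30%): Use of architectural views and appropriate notation.
Criterion 3 (30%): Critical evaluation of trade-offs and quality attributes.
"""

def generate_feedback(submission_text: str) -> str:
    """Ask the model to mark a submission against the rubric and return
    personalized, constructive feedback for the student."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the paper refers only to "the ChatGPT API"
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a marker for a Software Architecture module. "
                    "Assess the student's submission strictly against the rubric, "
                    "give a mark per criterion, and provide constructive, "
                    "personalized suggestions for improvement."
                ),
            },
            {
                "role": "user",
                "content": f"Rubric:\n{RUBRIC}\n\nStudent submission:\n{submission_text}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_feedback("The proposed system uses a layered architecture..."))
```

In practice, the same rubric-conditioned prompt could be run over an entire cohort's submissions in a loop, which is the kind of overhead reduction for large cohorts the abstract points to.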

Page Count
13 pages

Category
Computer Science:
Computers and Society