Leveraging Peer, Self, and Teacher Assessments for Generative AI-Enhanced Feedback
By: Alvaro Becerra, Ruth Cobos
Providing timely and meaningful feedback remains a persistent challenge in higher education, especially in large courses where teachers must balance formative depth with scalability. Recent advances in Generative Artificial Intelligence (GenAI) offer new opportunities to support feedback processes while maintaining human oversight. This paper presents a study conducted within the AICoFe (AI-based Collaborative Feedback) system, which integrates teacher, peer, and self-assessments of engineering students' oral presentations. Using a validated rubric, 46 evaluation sets were analyzed to examine agreement, correlation, and bias across evaluators. The analyses revealed consistent overall alignment among sources but also systematic variations in scoring behavior, reflecting distinct evaluative perspectives. These findings informed the proposal of an enhanced GenAI model within the AICoFe system, designed to integrate human assessments through weighted input aggregation, bias detection, and context-aware feedback generation. The study contributes empirical evidence and design principles for developing GenAI-based feedback systems that combine data-based efficiency with pedagogical validity and transparency.
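The weighted input aggregation with bias correction mentioned above could be sketched as follows. This is a minimal illustrative example, not the paper's actual model: the function name, the example weights, and the bias offsets (e.g. the assumption that peer and self scores run slightly high) are all hypothetical.

```python
# Hypothetical sketch: combine teacher, peer, and self rubric scores
# into a single aggregated score, subtracting an estimated systematic
# bias per source before weighting. Weights and offsets are illustrative
# assumptions, not values reported in the study.

def aggregate_scores(scores, weights, bias_offsets):
    """Return a weighted, bias-adjusted mean of per-source rubric scores.

    scores:       mapping source -> raw rubric score (e.g. on a 0-10 scale)
    weights:      mapping source -> relative weight (need not sum to 1)
    bias_offsets: mapping source -> systematic offset to subtract,
                  as could be estimated from cross-evaluator agreement data
    """
    total_weight = sum(weights[s] for s in scores)
    adjusted = {s: scores[s] - bias_offsets.get(s, 0.0) for s in scores}
    return sum(weights[s] * adjusted[s] for s in scores) / total_weight

# Example usage with made-up numbers: peers and self-assessments
# assumed to over-score relative to the teacher.
scores = {"teacher": 8.0, "peer": 8.6, "self": 9.2}
weights = {"teacher": 0.5, "peer": 0.3, "self": 0.2}
bias = {"peer": 0.3, "self": 0.8}
print(round(aggregate_scores(scores, weights, bias), 2))
```

A design choice worth noting: keeping weights and offsets as explicit per-source parameters, rather than folding them into a single model, supports the transparency goal stated in the abstract, since each source's contribution remains inspectable.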