Enhancing LLM-Based Short Answer Grading with Retrieval-Augmented Generation
By: Yucheng Chu, Peng He, Hang Li, and more
Potential Business Impact:
Helps computers grade science answers better.
Short answer assessment is a vital component of science education, allowing evaluation of students' complex three-dimensional understanding. Large language models (LLMs), which possess human-like abilities in linguistic tasks, are increasingly used to assist human graders and reduce their workload. However, LLMs' limited domain knowledge restricts their understanding of task-specific requirements and hinders their ability to achieve satisfactory performance. Retrieval-augmented generation (RAG) emerges as a promising solution by enabling LLMs to access relevant domain-specific knowledge during assessment. In this work, we propose an adaptive RAG framework for automated grading that dynamically retrieves and incorporates domain-specific knowledge based on the question and student answer context. Our approach combines semantic search with curated educational sources to retrieve valuable reference materials. Experimental results on a science education dataset demonstrate that our system improves grading accuracy compared to baseline LLM approaches. The findings suggest that RAG-enhanced grading systems can serve as reliable support for human graders while delivering efficient performance gains.
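To make the pipeline concrete, the sketch below shows one way a retrieval-augmented grading loop could be wired together. It is not the authors' implementation: the reference passages, the `retrieve`, `call_llm`, and `grade` functions are all hypothetical, and TF-IDF similarity stands in for the paper's semantic search and curated educational sources.

```python
# Minimal sketch of a retrieval-augmented grading loop (illustrative, not the paper's code).
# Assumptions: a small in-memory corpus of curated reference passages, TF-IDF retrieval
# as a stand-in for dense semantic search, and a placeholder LLM call.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical curated educational reference passages.
REFERENCE_CORPUS = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "A scientific model is a simplified representation used to explain phenomena.",
    "Energy is conserved: it changes form, but the total amount stays constant.",
]

vectorizer = TfidfVectorizer().fit(REFERENCE_CORPUS)
corpus_matrix = vectorizer.transform(REFERENCE_CORPUS)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k reference passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), corpus_matrix)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [REFERENCE_CORPUS[i] for i in top_indices]


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; returns a dummy grade so the sketch runs end to end."""
    return "partially correct"


def grade(question: str, student_answer: str) -> str:
    """Grade a short answer, conditioning the LLM on retrieved domain context."""
    # Retrieval is driven by both the question and the student answer,
    # mirroring the context-dependent retrieval described in the abstract.
    context = "\n".join(retrieve(f"{question} {student_answer}"))
    prompt = (
        "You are grading a science short-answer question.\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        f"Student answer: {student_answer}\n"
        "Return a grade: correct, partially correct, or incorrect."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(grade(
        "Why do plants need sunlight?",
        "Plants use sunlight to make food through photosynthesis.",
    ))
```

In a full system, the TF-IDF retriever would be replaced by an embedding-based semantic search over the curated sources, and `call_llm` by the grading model's API, but the overall question-plus-answer-to-context-to-prompt flow would remain the same.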
Similar Papers
When Retrieval Succeeds and Fails: Rethinking Retrieval-Augmented Generation for LLMs
Computation and Language
Helps smart computers learn new things faster.
Retrieval Augmented Generation Evaluation in the Era of Large Language Models: A Comprehensive Survey
Computation and Language
Tests how AI uses outside facts to answer questions.
LLM-Independent Adaptive RAG: Let the Question Speak for Itself
Computation and Language
Smartly finds answers, saving computer power.