Enhancing LLM-Based Short Answer Grading with Retrieval-Augmented Generation

Published: April 7, 2025 | arXiv ID: 2504.05276v2

By: Yucheng Chu, Peng He, Hang Li, and more

Potential Business Impact:

Helps computers grade science answers better.

Business Areas:
Semantic Search, Internet Services

Short answer assessment is a vital component of science education, allowing evaluation of students' complex three-dimensional understanding. Large language models (LLMs), which possess human-like ability in linguistic tasks, are increasingly used to assist human graders and reduce their workload. However, LLMs' limited domain knowledge restricts their understanding of task-specific requirements and hinders them from achieving satisfactory performance. Retrieval-augmented generation (RAG) emerges as a promising solution by enabling LLMs to access relevant domain-specific knowledge during assessment. In this work, we propose an adaptive RAG framework for automated grading that dynamically retrieves and incorporates domain-specific knowledge based on the question and student-answer context. Our approach combines semantic search with curated educational sources to retrieve valuable reference materials. Experimental results on a science education dataset demonstrate that our system improves grading accuracy over baseline LLM approaches. The findings suggest that RAG-enhanced grading systems can serve as reliable support for human graders while delivering efficient performance gains.
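The abstract gives no implementation details, but the pipeline it describes (retrieve reference material by semantic similarity to the question-and-answer context, then grade with an LLM prompt augmented by the retrieved passages) can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' code: the `KNOWLEDGE_BASE` entries, the bag-of-words `embed` stand-in for a real semantic encoder, and the prompt wording are all hypothetical.

```python
import math
from collections import Counter

# Hypothetical curated knowledge base: (source, passage) pairs.
# In the paper's framework these would come from curated educational sources.
KNOWLEDGE_BASE = [
    ("Scoring rubric", "A complete answer links the claim to evidence about energy transfer."),
    ("Textbook ch. 4", "Photosynthesis converts light energy into chemical energy in glucose."),
    ("Grading guide", "Award full credit when the mechanism and the outcome are both named."),
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a semantic encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, answer: str, k: int = 2) -> list[str]:
    """Adaptive retrieval: rank passages against the combined question/answer context."""
    query = embed(question + " " + answer)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda kb: cosine(query, embed(kb[1])), reverse=True)
    return [f"[{src}] {text}" for src, text in ranked[:k]]

def build_grading_prompt(question: str, answer: str) -> str:
    """Assemble the augmented grading prompt; the LLM call itself is out of scope here."""
    context = "\n".join(retrieve(question, answer))
    return (
        "You are grading a science short answer.\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\nStudent answer: {answer}\n"
        "Return a score of 0, 1, or 2 with a one-sentence justification."
    )

print(build_grading_prompt(
    "What does photosynthesis produce?",
    "It turns light into chemical energy stored as glucose.",
))
```

The key design point the abstract emphasizes is that retrieval is conditioned on both the question and the student answer, so different answers to the same question can pull in different reference material.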

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science: Computation and Language