Towards Human-Like Grading: A Unified LLM-Enhanced Framework for Subjective Question Evaluation

Published: October 9, 2025 | arXiv ID: 2510.07912v1

By: Fanwei Zhu, Jiaxuan He, Xiaoxiao Chen, and more

Potential Business Impact:

Automatically grades open-ended exam answers across question types and domains, enabling human-like scoring for training and certification exams.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic grading of subjective questions remains a significant challenge in examination assessment due to the diversity in question formats and the open-ended nature of student responses. Existing works primarily focus on a specific type of subjective question and lack the generality to support comprehensive exams that contain diverse question types. In this paper, we propose a unified Large Language Model (LLM)-enhanced auto-grading framework that provides human-like evaluation for all types of subjective questions across various domains. Our framework integrates four complementary modules to holistically evaluate student answers. In addition to a basic text matching module that provides a foundational assessment of content similarity, we leverage the powerful reasoning and generative capabilities of LLMs to: (1) compare key knowledge points extracted from both student and reference answers, (2) generate a pseudo-question from the student answer to assess its relevance to the original question, and (3) simulate human evaluation by identifying content-related and non-content strengths and weaknesses. Extensive experiments on both general-purpose and domain-specific datasets show that our framework consistently outperforms traditional and LLM-based baselines across multiple grading metrics. Moreover, the proposed system has been successfully deployed in real-world training and certification exams at a major e-commerce enterprise.
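The abstract describes the four complementary modules concretely enough to sketch how their scores could be fused into a single grade. Below is a minimal, hypothetical Python sketch, not the authors' code: the names (`grade_answer`, `text_match_score`, `llm_score`), the prompts, and the equal-weight linear fusion are all illustrative assumptions; only the four module roles come from the paper.

```python
# Hypothetical sketch of the four-module grading pipeline from the abstract.
# All function names, prompts, and the score-fusion weights are assumptions;
# only the module roles (text matching, knowledge-point comparison,
# pseudo-question relevance, simulated human evaluation) follow the paper.

from difflib import SequenceMatcher


def text_match_score(student: str, reference: str) -> float:
    """Basic text-matching module: lexical similarity in [0, 1]."""
    return SequenceMatcher(None, student, reference).ratio()


def llm_score(prompt: str, llm) -> float:
    """Ask the LLM for a score in [0, 1]; `llm` is any text-in/text-out callable."""
    reply = llm(prompt)
    try:
        return min(max(float(reply.strip()), 0.0), 1.0)
    except ValueError:
        return 0.0  # fall back if the LLM reply is not a bare number


def grade_answer(question: str, student: str, reference: str, llm,
                 weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Fuse the four complementary module scores into one overall grade."""
    scores = [
        # (0) foundational assessment of content similarity
        text_match_score(student, reference),
        # (1) compare key knowledge points extracted from both answers
        llm_score(
            "Extract the key knowledge points of the reference answer and of "
            "the student answer, compare them, and output only a coverage "
            f"score in [0,1].\nReference: {reference}\nStudent: {student}",
            llm),
        # (2) generate a pseudo-question from the student answer and assess
        #     its relevance to the original question
        llm_score(
            "Write the question this answer most likely responds to, then "
            "output only a [0,1] similarity score between that pseudo-question "
            f"and the original question.\nOriginal question: {question}\n"
            f"Answer: {student}",
            llm),
        # (3) simulate human evaluation: content-related and non-content
        #     strengths and weaknesses
        llm_score(
            "Act as a human grader. List content-related and non-content "
            "strengths and weaknesses of the student answer, then output only "
            f"an overall score in [0,1].\nQuestion: {question}\n"
            f"Student: {student}",
            llm),
    ]
    return sum(w * s for w, s in zip(weights, scores))
```

Any chat-completion wrapper that takes a prompt string and returns a reply string can serve as `llm`; the equal weights are a placeholder, as the paper does not state how the module scores are combined.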

Page Count
8 pages

Category
Computer Science:
Computation and Language