ErrEval: Error-Aware Evaluation for Question Generation through Explicit Diagnostics
By: Weiping Fu, Bifan Wei, Jingyi Hao, and more
Automatic Question Generation (QG) often produces outputs with critical defects, such as factual hallucinations and answer mismatches. However, existing evaluation methods, including LLM-based evaluators, mostly adopt a black-box, holistic paradigm without explicit error modeling, which leads them to overlook such defects and overestimate question quality. To address this issue, we propose ErrEval, a flexible, Error-aware Evaluation framework that enhances QG evaluation through explicit error diagnostics. Specifically, ErrEval reformulates evaluation as a two-stage process: error diagnosis followed by informed scoring. In the first stage, a lightweight, plug-and-play Error Identifier detects and categorizes common errors across structural, linguistic, and content-related aspects. These diagnostic signals are then incorporated as explicit evidence to guide LLM evaluators toward finer-grained and better-grounded judgments. Extensive experiments on three benchmarks demonstrate the effectiveness of ErrEval, showing that incorporating explicit diagnostics improves alignment with human judgments. Further analyses confirm that ErrEval effectively mitigates the overestimation of low-quality questions.
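To make the two-stage design concrete, here is a minimal Python sketch of the pipeline the abstract describes: a Stage-1 error identifier that emits categorized diagnostics, and a Stage-2 step that folds those diagnostics into the prompt given to an LLM evaluator. All names (`identify_errors`, `build_scoring_prompt`, the error labels, and the surface-level checks) are illustrative assumptions, not the paper's actual Error Identifier or prompt format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical error taxonomy mirroring the structural, linguistic, and
# content-related aspects mentioned in the abstract (labels are assumptions).
ERROR_TYPES = [
    "incomplete_question",      # structural
    "ungrammatical_phrasing",   # linguistic
    "factual_hallucination",    # content-related
    "answer_mismatch",          # content-related
]


@dataclass
class Diagnosis:
    error_type: str
    evidence: str


def identify_errors(context: str, question: str, answer: str) -> List[Diagnosis]:
    """Stage 1 (sketch): a lightweight, plug-and-play Error Identifier.

    In ErrEval this would be a learned component; the trivial surface checks
    below exist only to illustrate the interface it exposes.
    """
    diagnoses: List[Diagnosis] = []
    if not question.strip().endswith("?"):
        diagnoses.append(Diagnosis("incomplete_question", "missing question mark"))
    if answer.lower() not in context.lower():
        diagnoses.append(Diagnosis("answer_mismatch", "answer not found in context"))
    return diagnoses


def build_scoring_prompt(context: str, question: str, answer: str,
                         diagnoses: List[Diagnosis]) -> str:
    """Stage 2 (sketch): pass diagnostic signals to the LLM evaluator
    as explicit evidence, so scoring is grounded in the detected errors."""
    error_block = "\n".join(
        f"- {d.error_type}: {d.evidence}" for d in diagnoses
    ) or "- none detected"
    return (
        "Rate the quality of this generated question from 1 to 5.\n"
        f"Context: {context}\nQuestion: {question}\nAnswer: {answer}\n"
        f"Detected errors:\n{error_block}\n"
        "Weigh the detected errors as explicit evidence before scoring."
    )


if __name__ == "__main__":
    ctx = "The Nile is the longest river in Africa."
    q = "What is the longest river in Africa"   # defective: no question mark
    a = "The Amazon"                            # defective: answer mismatch
    prompt = build_scoring_prompt(ctx, q, a, identify_errors(ctx, q, a))
    print(prompt)  # in the full pipeline, this prompt would go to an LLM judge
```

The design point this sketch is meant to convey is that the evaluator never sees the question in isolation: Stage 1's categorized errors are serialized into the scoring prompt, which is how explicit diagnostics counteract the holistic, black-box scoring the abstract criticizes.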