CiteEval: Principle-Driven Citation Evaluation for Source Attribution
By: Yumo Xu, Peng Qi, Jifan Chen, and more
Potential Business Impact:
Helps computers judge if sources truly support claims.
Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a principle-driven citation evaluation framework focused on fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, user query, and generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations on citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluate and improve model-generated citations.
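The abstract's core meta-evaluation idea is that an automatic citation metric is validated by how strongly its scores correlate with human judgments of citation quality. The sketch below is not the paper's CiteEval-Auto implementation; it only illustrates that generic correlation step, with a hypothetical placeholder for per-response scores and made-up numbers.

```python
# Minimal sketch of metric-vs-human meta-evaluation, assuming per-response
# citation-quality scores from an automatic metric and from human annotators.
# This is NOT the paper's CiteEval-Auto code; it only shows the correlation step.
from scipy.stats import kendalltau, pearsonr


def correlate_with_humans(auto_scores, human_scores):
    """Return Pearson and Kendall correlations between automatic and human scores."""
    pearson_r, _ = pearsonr(auto_scores, human_scores)
    kendall_tau, _ = kendalltau(auto_scores, human_scores)
    return {"pearson": pearson_r, "kendall": kendall_tau}


# Illustrative, made-up scores for five system responses.
auto_scores = [0.92, 0.40, 0.75, 0.10, 0.66]
human_scores = [0.90, 0.35, 0.80, 0.20, 0.60]
print(correlate_with_humans(auto_scores, human_scores))
```

A higher correlation under this kind of comparison is what the abstract means by an automatic metric exhibiting "strong correlation with human judgments."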
Similar Papers
SemanticCite: Citation Verification with AI-Powered Full-Text Analysis and Evidence-Based Reasoning
Computation and Language
Checks if research papers correctly mention their sources.
HypoEval: Hypothesis-Guided Evaluation for Natural Language Generation
Computation and Language
Helps computers judge writing better with less help.
Automatic Evaluation Metrics for Artificially Generated Scientific Research
Computers and Society
Helps check AI science papers faster.