CiteEval: Principle-Driven Citation Evaluation for Source Attribution

Published: June 2, 2025 | arXiv ID: 2506.01829v1

By: Yumo Xu, Peng Qi, Jifan Chen, and more

BigTech Affiliations: Amazon, Google

Potential Business Impact:

Helps computers judge whether cited sources truly support claims.

Business Areas:
Semantic Search, Internet Services

Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a principle-driven citation evaluation framework that focuses on fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, the user query, and the generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations on citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluating and improving model-generated citations.
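The baseline the abstract critiques reduces citation evaluation to per-source entailment: does the cited passage entail the generated claim? For concreteness, here is a minimal sketch of that NLI-based proxy, not of CiteEval-Auto itself (whose model-based metrics are described in the paper). The model choice roberta-large-mnli, the function name supportiveness, and the example texts are illustrative assumptions.

```python
# Minimal sketch of the NLI-based supportiveness baseline that CiteEval argues
# is a suboptimal proxy: judge one (cited source, claim) pair in isolation
# with an off-the-shelf MNLI classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # illustrative choice of NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def supportiveness(source: str, claim: str) -> dict[str, float]:
    """Return ternary supportiveness probabilities for a (source, claim) pair."""
    inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    probs = torch.softmax(logits, dim=-1).tolist()
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return dict(zip(["contradiction", "neutral", "entailment"], probs))

# Hypothetical example: one citation judged with no query or retrieval context.
source = "The Eiffel Tower was completed in 1889 for the World's Fair in Paris."
claim = "The Eiffel Tower opened in 1889."
scores = supportiveness(source, claim)
print(max(scores, key=scores.get))  # expected: "entailment"
```

CiteEval's point is that this per-source entailment label sees neither the full retrieval context, nor the user query, nor the rest of the generated text; CiteEval-Auto is instead meta-evaluated by its correlation with the human judgments collected in CiteBench.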

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Computation and Language