
DEER: A Comprehensive and Reliable Benchmark for Deep-Research Expert Reports

Published: December 19, 2025 | arXiv ID: 2512.17776v1

By: Janghoon Han, Heegyu Kim, Changho Lee, and more

Potential Business Impact:

Tests whether AI-generated research reports are factually reliable and meet expert quality standards.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models (LLMs) advance, deep research systems can generate expert-level reports via multi-step reasoning and evidence-based synthesis, but evaluating such reports remains challenging. Existing benchmarks often lack systematic criteria for expert reporting, evaluations that rely heavily on LLM judges can fail to capture issues that require expert judgment, and source verification typically covers only a limited subset of explicitly cited statements rather than report-wide factual reliability. We introduce DEER, a benchmark for evaluating expert-level deep research reports. DEER comprises 50 report-writing tasks spanning 13 domains and an expert-grounded evaluation taxonomy (7 dimensions, 25 sub-dimensions) operationalized into 130 fine-grained rubric items. DEER further provides task-specific expert guidance to help LLM judges assess expert-level report quality more consistently. Complementing rubric-based assessment, we propose a document-level fact-checking architecture that extracts and verifies all claims across the entire report, including both cited and uncited ones, and quantifies external-evidence quality. DEER correlates closely with human expert judgments and yields interpretable diagnostics of system strengths and weaknesses.
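The document-level fact-checking the abstract describes can be pictured as an extract-then-verify loop over every claim in a report, cited or not, followed by aggregation into reliability and evidence-quality scores. The sketch below is only an illustration of that idea, not DEER's actual implementation; the extractor and verifier callables (`extract_claims`, `verify_claim`) and the output fields are assumed placeholders.

```python
# Illustrative sketch of document-level fact-checking, assuming hypothetical
# extractor and verifier components (e.g. an LLM-based claim extractor and a
# retrieval-plus-judge verifier). Not DEER's published code or API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Claim:
    text: str    # atomic factual statement extracted from the report
    cited: bool  # whether the report attaches an explicit citation

@dataclass
class Verdict:
    claim: Claim
    supported: bool           # did retrieved evidence support the claim?
    evidence_quality: float   # 0-1 score for the external evidence used

def fact_check_report(
    report: str,
    extract_claims: Callable[[str], List[Claim]],  # hypothetical extractor
    verify_claim: Callable[[Claim], Verdict],      # hypothetical verifier
) -> dict:
    """Check every claim in the report, cited or not, and aggregate scores."""
    claims = extract_claims(report)
    verdicts = [verify_claim(c) for c in claims]
    supported = [v for v in verdicts if v.supported]
    return {
        "num_claims": len(claims),
        "support_rate": len(supported) / max(len(claims), 1),
        "avg_evidence_quality": (
            sum(v.evidence_quality for v in verdicts) / max(len(verdicts), 1)
        ),
    }
```

Aggregating over all claims, rather than only those with explicit citations, is what distinguishes this report-wide check from citation-only verification.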

Page Count
31 pages

Category
Computer Science:
Computation and Language