DEER: A Comprehensive and Reliable Benchmark for Deep-Research Expert Reports
By: Janghoon Han, Heegyu Kim, Changho Lee, and more
Potential Business Impact:
Tests whether AI-generated research reports are factually accurate and of expert quality.
As large language models (LLMs) advance, deep research systems can generate expert-level reports via multi-step reasoning and evidence-based synthesis, but evaluating such reports remains challenging. Existing benchmarks often lack systematic criteria for expert reporting; evaluations that rely heavily on LLM judges can fail to capture issues that require expert judgment; and source verification typically covers only a limited subset of explicitly cited statements rather than assessing report-wide factual reliability. We introduce DEER, a benchmark for evaluating expert-level deep research reports. DEER comprises 50 report-writing tasks spanning 13 domains and an expert-grounded evaluation taxonomy (7 dimensions, 25 sub-dimensions) operationalized into 130 fine-grained rubric items. DEER further provides task-specific expert guidance to help LLM judges assess expert-level report quality more consistently. Complementing the rubric-based assessment, we propose a document-level fact-checking architecture that extracts and verifies all claims across the entire report, both cited and uncited, and quantifies external-evidence quality. DEER correlates closely with human expert judgments and yields interpretable diagnostics of system strengths and weaknesses.
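To make the document-level fact-checking idea concrete, below is a minimal sketch of the general flow the abstract describes: extract every claim from a report (cited or not), verify each against external evidence, and compute a report-wide reliability score. All function names, the sentence-splitting heuristic, and the substring-based verifier are illustrative placeholders, not the paper's actual architecture, which relies on retrieval and LLM judgment.

```python
import re
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    text: str        # claim text with citation markers stripped
    cited: bool      # whether the report attached an explicit citation
    supported: bool  # verification outcome against external evidence

def extract_claims(report: str) -> List[Claim]:
    # Hypothetical extractor: in DEER this step would be handled by an LLM
    # that pulls every factual statement, cited or uncited, from the report.
    # Here each sentence is treated as one claim purely for illustration.
    claims = []
    for sentence in re.split(r"(?<=[.!?])\s+", report.strip()):
        if not sentence:
            continue
        cited = bool(re.search(r"\[\d+\]", sentence))
        text = re.sub(r"\s*\[\d+\]", "", sentence).rstrip(".!?")
        claims.append(Claim(text=text, cited=cited, supported=False))
    return claims

def verify(claim: Claim, evidence: List[str]) -> bool:
    # Hypothetical verifier standing in for retrieval plus LLM judgment;
    # a trivial substring check keeps the example runnable end to end.
    return any(claim.text.lower() in doc.lower() for doc in evidence)

def factual_reliability(report: str, evidence: List[str]) -> float:
    # Document-level score: fraction of all claims (cited and uncited)
    # judged to be supported by external evidence.
    claims = extract_claims(report)
    for c in claims:
        c.supported = verify(c, evidence)
    return sum(c.supported for c in claims) / max(len(claims), 1)

if __name__ == "__main__":
    report = "Solar capacity grew in 2023 [1]. Wind output tripled overnight."
    evidence = ["Industry reports note that solar capacity grew in 2023 worldwide."]
    print(f"Report-wide factual reliability: {factual_reliability(report, evidence):.2f}")
```

Running the example prints a score of 0.50: the cited solar claim is matched by the evidence, while the uncited wind claim is not, which is the kind of report-wide diagnostic the benchmark aims to surface.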
Similar Papers
ReportBench: Evaluating Deep Research Agents via Academic Survey Tasks
Computation and Language
Tests if AI reports are true and useful.
How Far Are We from Genuinely Useful Deep Research Agents?
Computation and Language
Helps computers write better research reports.
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
Computation and Language
Tests AI that writes research reports like a human.