DeepSynth-Eval: Objectively Evaluating Information Consolidation in Deep Survey Writing
By: Hongzhi Zhang, Yuanze Hu, Tinghai Zhang, and more
Potential Business Impact:
Helps AI write better reports from lots of information.
The evolution of Large Language Models (LLMs) towards autonomous agents has catalyzed progress in Deep Research. While retrieval capabilities are well-benchmarked, the post-retrieval synthesis stage, where agents must digest massive amounts of context and consolidate fragmented evidence into coherent, long-form reports, remains under-evaluated due to the subjectivity of open-ended writing. To bridge this gap, we introduce DeepSynth-Eval, a benchmark designed to objectively evaluate information consolidation capabilities. We leverage high-quality survey papers as gold standards, reverse-engineering research requests and constructing "Oracle Contexts" from their bibliographies to isolate synthesis from retrieval noise. We propose a fine-grained evaluation protocol using General Checklists (for factual coverage) and Constraint Checklists (for structural organization), transforming subjective judgment into verifiable metrics. Experiments across 96 tasks reveal that synthesizing information from hundreds of references remains a significant challenge. Our results demonstrate that agentic plan-and-write workflows significantly outperform single-turn generation, effectively reducing hallucinations and improving adherence to complex structural constraints.
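To make the checklist idea concrete, here is a minimal sketch of how checklist-based scoring of a generated survey could work. It is an illustration only: the class and function names (ChecklistItem, item_satisfied, score_report) and the keyword-matching "judge" are assumptions for readability, not the paper's implementation, which would presumably verify each checklist question with an LLM judge rather than string matching.

```python
# Minimal sketch of checklist-based report scoring (illustrative, not the
# paper's actual evaluator). Each checklist item is a verifiable yes/no
# question; "general" items measure factual coverage, "constraint" items
# measure structural/organizational adherence.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str        # verifiable yes/no question about the report
    kind: str            # "general" (factual coverage) or "constraint" (structure)
    keywords: tuple      # stand-in evidence cues used by this toy judge

def item_satisfied(report: str, item: ChecklistItem) -> bool:
    """Toy judge: the item passes if all of its evidence cues appear in the
    report. A real evaluator would ask an LLM to answer item.question
    against the full report text."""
    text = report.lower()
    return all(k.lower() in text for k in item.keywords)

def score_report(report: str, checklist: list[ChecklistItem]) -> dict:
    """Return pass rates separately for general and constraint items."""
    buckets = {"general": [], "constraint": []}
    for item in checklist:
        buckets[item.kind].append(item_satisfied(report, item))
    return {
        kind: (sum(hits) / len(hits) if hits else 0.0)
        for kind, hits in buckets.items()
    }

if __name__ == "__main__":
    checklist = [
        ChecklistItem("Does the report cover retrieval-augmented generation?",
                      "general", ("retrieval-augmented",)),
        ChecklistItem("Does the report include a chronology covering 2023-2024?",
                      "constraint", ("2023", "2024")),
    ]
    report = "This survey reviews retrieval-augmented generation work from 2023 to 2024..."
    print(score_report(report, checklist))  # e.g. {'general': 1.0, 'constraint': 1.0}
```

The point of the sketch is the scoring structure: splitting verification into many small, checkable items turns an open-ended judgment about a long report into aggregate pass rates that can be compared across systems.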
Similar Papers
SurveyEval: Towards Comprehensive Evaluation of LLM-Generated Academic Surveys
Computation and Language
Tests how well AI writes academic survey papers.
DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis
Computation and Language
Helps computers write research papers by finding and summarizing info.
SynClaimEval: A Framework for Evaluating the Utility of Synthetic Data in Long-Context Claim Verification
Computation and Language
Makes AI better at checking facts in long texts.