DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
By: Mingxuan Du, Benfeng Xu, Chiwei Zhu, and more
Potential Business Impact:
Tests AI that writes research reports like a human.
Deep Research Agents (DRAs) are a prominent category of LLM-based agents. By autonomously orchestrating multi-step web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, citation-rich reports, compressing hours of manual desk research into minutes. However, a comprehensive benchmark for systematically evaluating the capabilities of these agents remains absent. To bridge this gap, we present DeepResearch Bench, a benchmark of 100 PhD-level research tasks, each meticulously crafted by domain experts across 22 distinct fields. Evaluating DRAs is inherently complex and labor-intensive, so we propose two novel methodologies that achieve strong alignment with human judgment. The first is a reference-based method with adaptive criteria that assesses the quality of generated research reports. The second evaluates a DRA's information retrieval and collection capabilities by measuring its effective citation count and overall citation accuracy. We have open-sourced DeepResearch Bench and key components of these frameworks at https://github.com/Ayanami0730/deep_research_bench to accelerate the development of practical LLM-based agents.
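To make the citation-oriented evaluation concrete, here is a minimal sketch of the two metrics named in the abstract. It is not the benchmark's actual implementation: it assumes each citation in a report has already been judged as supported or unsupported by its cited source, and it then counts supported citations (effective citation count) and divides by the total (citation accuracy). The names Citation and score_citations are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Citation:
    url: str          # source the report cites
    supported: bool   # whether the cited source actually backs the claim

def score_citations(citations: List[Citation]) -> Dict[str, float]:
    """Illustrative metrics: effective citation count and citation accuracy.

    Mirrors the quantities named in the abstract, not the benchmark's
    exact scoring pipeline.
    """
    total = len(citations)
    effective = sum(1 for c in citations if c.supported)
    accuracy = effective / total if total else 0.0
    return {"effective_citation_count": effective, "citation_accuracy": accuracy}

if __name__ == "__main__":
    report_citations = [
        Citation("https://example.org/a", supported=True),
        Citation("https://example.org/b", supported=False),
        Citation("https://example.org/c", supported=True),
    ]
    print(score_citations(report_citations))
    # {'effective_citation_count': 2, 'citation_accuracy': 0.666...}
```

In practice, the per-citation supported/unsupported judgment is the hard part and would itself rely on retrieving each cited page and checking it against the report's claim; the arithmetic above only aggregates those judgments.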
Similar Papers
Deep Research Bench: Evaluating AI Web Research Agents
Artificial Intelligence
Tests AI's ability to find answers online.
A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports
Artificial Intelligence
Helps AI agents solve hard problems better.
How Far Are We from Genuinely Useful Deep Research Agents?
Computation and Language
Helps computers write better research reports.