Deep Research Bench: Evaluating AI Web Research Agents
By: FutureSearch: Nikos I. Bosse and more
Potential Business Impact:
Tests AI's ability to find answers online.
Among the most common use cases of modern AI is LLM chat with web search enabled. However, no direct evaluations of the quality of web research agents exist that control for the continually changing web. We introduce Deep Research Bench, consisting of 89 multi-step web research task instances of varying difficulty across 8 diverse task categories, with the answers carefully worked out by skilled humans. We provide a "RetroSearch" environment with a large frozen set of scraped web pages, and demonstrate that offline "RetroSearch" agents perform comparably to "live web" agents, enabling reliable evaluations of models over time. We provide robust agent tooling and scaffolding to benchmark major LLMs as they are released, including "thinking" models like o3 and Gemini 2.5 Pro. We include automated evaluations of the lengthy agent traces to report progress over time in hallucinations, tool use, and forgetting. Finally, we evaluate the major web research products branded as "Deep Research", "Deep Search", "Search", or "Research". Results are available on a public leaderboard at https://drb.futuresearch.ai/.
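To make the "frozen web" idea concrete, here is a minimal Python sketch of how an offline search environment like RetroSearch could be wired up: the agent's search and fetch tools read from a fixed snapshot of scraped pages instead of the live web, so results are identical no matter when the evaluation is run. All names here (FrozenWebIndex, search, fetch, the snapshot layout) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a frozen-web tool backend, assuming a directory of
# JSON files where each file holds one scraped page ({url, title, text}).
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Page:
    url: str
    title: str
    text: str


class FrozenWebIndex:
    """Serves search results and page contents from a frozen scrape."""

    def __init__(self, snapshot_dir: Path):
        # Load the entire snapshot up front; the corpus never changes.
        self.pages: dict[str, Page] = {}
        for path in snapshot_dir.glob("*.json"):
            record = json.loads(path.read_text())
            self.pages[record["url"]] = Page(
                url=record["url"], title=record["title"], text=record["text"]
            )

    def search(self, query: str, k: int = 5) -> list[Page]:
        # Naive keyword counting stands in for whatever retrieval the real
        # environment uses; the point is that the same query always returns
        # the same results, regardless of when the eval is run.
        terms = query.lower().split()
        scored = [
            (sum(page.text.lower().count(t) for t in terms), page)
            for page in self.pages.values()
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [page for score, page in scored[:k] if score > 0]

    def fetch(self, url: str) -> str | None:
        # "Visiting" a URL returns the frozen copy, or None if the page was
        # never scraped -- the offline analogue of a dead link.
        page = self.pages.get(url)
        return page.text if page else None


if __name__ == "__main__":
    index = FrozenWebIndex(Path("retrosearch_snapshot"))
    for page in index.search("largest offshore wind farm capacity"):
        print(page.title, page.url)
```

Under this kind of setup, an agent benchmarked today and the same agent benchmarked a year from now see exactly the same "web", which is what allows offline scores to be compared over time in a way live-web runs cannot.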
Similar Papers
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
Computation and Language
Tests AI that writes research reports like a human.
DeepShop: A Benchmark for Deep Research Shopping Agents
Information Retrieval
Helps online shoppers find exactly what they want.
ReportBench: Evaluating Deep Research Agents via Academic Survey Tasks
Computation and Language
Tests if AI reports are true and useful.