DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation
By: Yibo Wang, Lei Wang, Yue Deng, and more
Deep research systems are widely used for multi-step web research, analysis, and cross-source synthesis, yet their evaluation remains challenging. Existing benchmarks often require annotation-intensive task construction, rely on static evaluation dimensions, or fail to reliably verify facts when citations are missing. To bridge these gaps, we introduce DeepResearchEval, an automated framework for deep research task construction and agentic evaluation. For task construction, we propose a persona-driven pipeline that generates realistic, complex research tasks anchored in diverse user profiles, applying a two-stage filter (Task Qualification and Search Necessity) to retain only tasks that require multi-source evidence integration and external retrieval. For evaluation, we propose an agentic pipeline with two components: Adaptive Point-wise Quality Evaluation, which dynamically derives task-specific evaluation dimensions, criteria, and weights conditioned on each generated task, and Active Fact-Checking, which autonomously extracts and verifies report statements via web search, even when citations are missing.
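To make the described flow concrete, below is a minimal sketch of the two pipelines under stated assumptions. Every name here (the `llm` and `search` interfaces and their methods such as `generate_research_task`, `task_qualification`, `derive_dimensions`, and `verify`) is an illustrative placeholder, not the paper's actual API; it only mirrors the structure the abstract outlines.

```python
# Hypothetical sketch of DeepResearchEval's two pipelines.
# The `llm` and `search` objects and all their methods are assumed
# interfaces for illustration, not the authors' implementation.

from dataclasses import dataclass, field


@dataclass
class Task:
    persona: str
    prompt: str


@dataclass
class Rubric:
    # Task-specific evaluation dimensions mapped to their weights.
    dimensions: dict = field(default_factory=dict)


def construct_tasks(personas, llm):
    """Persona-driven task construction with the two-stage filter."""
    tasks = []
    for persona in personas:
        task = Task(persona, llm.generate_research_task(persona))
        # Stage 1: Task Qualification -- keep only realistic, complex
        # tasks that demand multi-source evidence integration.
        if not llm.task_qualification(task):
            continue
        # Stage 2: Search Necessity -- drop tasks answerable from
        # parametric knowledge alone; retrieval must be required.
        if not llm.search_necessity(task):
            continue
        tasks.append(task)
    return tasks


def evaluate_report(task, report, llm, search):
    """Agentic evaluation: adaptive rubric plus active fact-checking."""
    # Adaptive Point-wise Quality Evaluation: derive dimensions,
    # criteria, and weights conditioned on this specific task.
    rubric = Rubric(llm.derive_dimensions(task))
    quality = sum(
        weight * llm.score(report, task, dimension)
        for dimension, weight in rubric.dimensions.items()
    )
    # Active Fact-Checking: extract checkable statements and verify
    # each against web evidence, even when the report lacks citations.
    statements = llm.extract_statements(report)
    verified = sum(
        llm.verify(stmt, search.retrieve(stmt)) for stmt in statements
    )
    factuality = verified / max(len(statements), 1)
    return quality, factuality
```

The key design point this sketch tries to capture is that nothing is fixed in advance: the rubric is derived per task rather than reused across the benchmark, and fact-checking actively retrieves its own evidence instead of depending on the report's citations.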
Similar Papers
Deep Research: A Survey of Autonomous Research Agents (Information Retrieval). Helps AI find and use information from the internet.
ReportBench: Evaluating Deep Research Agents via Academic Survey Tasks (Computation and Language). Tests if AI reports are true and useful.
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents (Computation and Language). Tests AI that writes research reports like a human.