IDRBench: Interactive Deep Research Benchmark
By: Yingchaojie Feng, Qiang Huang, Xiaoya Xie, and more
Potential Business Impact:
Helps AI research assistants stay aligned with user goals by asking clarifying questions.
Deep research agents powered by Large Language Models (LLMs) can perform multi-step reasoning, web exploration, and long-form report generation. However, most existing systems operate in an autonomous manner, assuming fully specified user intent and evaluating only final outputs. In practice, research goals are often underspecified and evolve during exploration, making sustained interaction essential for robust alignment. Despite its importance, interaction remains largely invisible to existing deep research benchmarks, which neither model dynamic user feedback nor quantify its costs. We introduce IDRBench, the first benchmark for systematically evaluating interactive deep research. IDRBench combines a modular multi-agent research framework with on-demand interaction, a scalable reference-grounded user simulator, and an interaction-aware evaluation suite that jointly measures interaction benefits (quality and alignment) and costs (turns and tokens). Experiments across seven state-of-the-art LLMs show that interaction consistently improves research quality and robustness, often outweighing differences in model capacity, while revealing substantial trade-offs in interaction efficiency.
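To make the benefit/cost trade-off concrete, here is a minimal sketch of how an interaction-aware evaluation might tally per-session benefits (quality, alignment) against costs (turns, tokens). The data fields, scoring ranges, and efficiency ratio below are illustrative assumptions, not IDRBench's actual API or metrics.

```python
# Minimal sketch (assumed structure, not IDRBench's actual API): aggregate the
# interaction benefits (quality, alignment) and costs (turns, tokens) that the
# abstract says the evaluation suite measures jointly.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SessionRecord:
    quality: float    # report quality score in [0, 1] (hypothetical rubric)
    alignment: float  # alignment with user intent in [0, 1] (hypothetical rubric)
    turns: int        # clarification exchanges with the simulated user
    tokens: int       # tokens spent on interaction


def summarize(sessions: List[SessionRecord]) -> Dict[str, float]:
    """Average benefit and cost metrics across evaluation sessions."""
    n = len(sessions)
    avg_quality = sum(s.quality for s in sessions) / n
    avg_alignment = sum(s.alignment for s in sessions) / n
    avg_turns = sum(s.turns for s in sessions) / n
    avg_tokens = sum(s.tokens for s in sessions) / n
    return {
        "avg_quality": avg_quality,
        "avg_alignment": avg_alignment,
        "avg_turns": avg_turns,
        "avg_tokens": avg_tokens,
        # Illustrative efficiency ratio (quality gained per interaction turn),
        # not a metric defined in the paper.
        "quality_per_turn": avg_quality / max(avg_turns, 1e-9),
    }


if __name__ == "__main__":
    demo = [
        SessionRecord(quality=0.72, alignment=0.81, turns=3, tokens=1850),
        SessionRecord(quality=0.64, alignment=0.69, turns=1, tokens=620),
    ]
    print(summarize(demo))
```

A report of this shape would let one compare models on both axes at once, e.g. whether extra clarification turns buy enough quality and alignment to justify their token cost.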
Similar Papers
IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis
Computation and Language
Tests computers on tricky, step-by-step data problems.
Understanding Large Language Models' Ability on Interdisciplinary Research
Computation and Language
Helps computers invent new science ideas.
Dr.Mi-Bench: A Modular-integrated Benchmark for Scientific Deep Research Agent
Computation and Language
Tests AI that reads science papers better.