Score: 1

Deep Research Comparator: A Platform For Fine-grained Human Annotations of Deep Research Agents

Published: July 7, 2025 | arXiv ID: 2507.05495v1

By: Prahaladh Chandrahasan, Jiahe Jin, Zhihan Zhang, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Helps evaluate how well AI agents search the web for information and write research reports.

Business Areas:
Semantic Search, Internet Services

Effectively evaluating deep research agents that autonomously search the web, analyze information, and generate reports remains a major challenge, particularly when it comes to assessing long reports and giving detailed feedback on their intermediate steps. To address these gaps, we introduce Deep Research Comparator, a platform that offers a holistic framework for deep research agent hosting, side-by-side comparison, fine-grained human feedback collection, and ranking calculation. Given a user query, our platform displays the final reports from two different agents along with their intermediate steps during generation. Annotators can evaluate the overall quality of final reports based on side-by-side comparison, and also provide detailed feedback separately by assessing intermediate steps or specific text spans within the final report. Furthermore, we develop Simple Deepresearch, an end-to-end agent scaffold. This scaffold serves as a baseline that facilitates the easy integration of various large language models to transform them into deep research agents for evaluation. To demonstrate the platform's utility for deep research agent development, we have collected real user preference data from 17 annotators on three deep research agents. A demo video of our platform can be found at https://www.youtube.com/watch?v=g4d2dnbdseg.
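The platform's "ranking calculation" step aggregates side-by-side human preferences into an agent ranking. The abstract does not specify the exact method, but a standard model for turning pairwise preference data into scores is Bradley-Terry, fit here with a simple iterative (MM) update; the agent names and the `bradley_terry` helper below are illustrative, not from the paper.

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=100):
    """Estimate Bradley-Terry strengths from pairwise preferences.

    comparisons: list of (winner, loser) pairs, one per annotator vote.
    Returns a dict mapping each agent name to a positive strength score;
    higher strength means the agent is preferred more often.
    """
    wins = defaultdict(int)          # total wins per agent
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    agents = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        agents.update((winner, loser))

    strength = {a: 1.0 for a in agents}
    for _ in range(iters):
        new = {}
        for a in agents:
            # MM update: wins[a] divided by sum over opponents of
            # (times compared) / (combined strength)
            denom = sum(
                pair_counts[frozenset((a, b))] / (strength[a] + strength[b])
                for b in agents
                if b != a and frozenset((a, b)) in pair_counts
            )
            new[a] = wins[a] / denom if denom > 0 else strength[a]
        # Normalize so strengths sum to the number of agents
        total = sum(new.values())
        strength = {a: v * len(agents) / total for a, v in new.items()}
    return strength

# Hypothetical votes over three agents: A is preferred to B, B to C.
votes = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C"), ("A", "C")]
scores = bradley_terry(votes)
```

With these votes, the fitted strengths order the agents A > B > C, matching the raw win counts; with real annotation data the model also handles agents that were never directly compared, via shared opponents.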

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science: Artificial Intelligence