Towards Personalized Deep Research: Benchmarks and Evaluations
By: Yuan Liang, Jiaxian Li, Yuqing Wang, and more
Potential Business Impact:
AI assistants learn what you need for research.
Deep Research Agents (DRAs) can autonomously conduct complex investigations and generate comprehensive reports, demonstrating strong real-world potential. However, existing evaluations mostly rely on closed-ended benchmarks, while open-ended deep research benchmarks remain scarce and typically neglect personalized scenarios. To bridge this gap, we introduce Personalized Deep Research Bench, the first benchmark for evaluating personalization in DRAs. It pairs 50 diverse research tasks across 10 domains with 25 authentic user profiles that combine structured persona attributes with dynamic real-world contexts, yielding 250 realistic user-task queries. To assess system performance, we propose the PQR Evaluation Framework, which jointly measures (P) Personalization Alignment, (Q) Content Quality, and (R) Factual Reliability. Our experiments on a range of systems highlight current capabilities and limitations in handling personalized deep research. This work establishes a rigorous foundation for developing and evaluating the next generation of truly personalized AI research assistants.
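The abstract names the three PQR axes but not how per-report scores are represented or combined. Below is a minimal sketch only, assuming each axis is judged on a normalized [0, 1] scale and aggregated with a weighted mean; the paper specifies neither, and `PQRScore`, `aggregate_pqr`, the weights, and all values here are hypothetical illustrations rather than the authors' method.

```python
from dataclasses import dataclass

@dataclass
class PQRScore:
    """Hypothetical container for the three PQR axes (field names assumed)."""
    personalization: float  # (P) alignment with the user profile, in [0, 1]
    quality: float          # (Q) content quality of the report, in [0, 1]
    reliability: float      # (R) factual reliability of claims, in [0, 1]

def aggregate_pqr(score: PQRScore,
                  weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Combine the three axes into a single scalar via a weighted mean.

    Equal weighting is an illustrative assumption; the paper does not
    state an aggregation rule.
    """
    wp, wq, wr = weights
    return wp * score.personalization + wq * score.quality + wr * score.reliability

# Example: scoring one of the 250 user-task queries (values invented)
report_score = PQRScore(personalization=0.62, quality=0.78, reliability=0.85)
print(f"PQR aggregate: {aggregate_pqr(report_score):.3f}")
```

A single scalar like this is convenient for leaderboard-style comparison across systems, while the per-axis fields preserve the diagnostic breakdown the framework is designed to provide.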
Similar Papers
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
Computation and Language
Tests AI that writes research reports like a human.
A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports
Artificial Intelligence
Helps AI agents solve hard problems better.
How Far Are We from Genuinely Useful Deep Research Agents?
Computation and Language
Helps computers write better research reports.