Score: 2

Beyond Static Evaluation: Rethinking the Assessment of Personalized Agent Adaptability in Information Retrieval

Published: October 5, 2025 | arXiv ID: 2510.03984v1

By: Kirandeep Kaur, Preetam Prabhu Srikar Dammu, Hideo Joho, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps AI learn what you like over time.

Business Areas:
Personalization; Commerce and Shopping

Personalized AI agents are becoming central to modern information retrieval, yet most evaluation methodologies remain static, relying on fixed benchmarks and one-off metrics that fail to reflect how users' needs evolve over time. These limitations hinder our ability to assess whether agents can meaningfully adapt to individuals across dynamic, longitudinal interactions. In this perspective paper, we propose a conceptual lens for rethinking evaluation in adaptive personalization, shifting the focus from static performance snapshots to interaction-aware, evolving assessments. We organize this lens around three core components: (1) persona-based user simulation with temporally evolving preference models; (2) structured elicitation protocols inspired by reference interviews to extract preferences in context; and (3) adaptation-aware evaluation mechanisms that measure how agent behavior improves across sessions and tasks. While recent works have embraced LLM-driven user simulation, we situate this practice within a broader paradigm for evaluating agents over time. To illustrate our ideas, we conduct a case study in e-commerce search using the PersonalWAB dataset. Beyond presenting a framework, our work lays a conceptual foundation for understanding and evaluating personalization as a continuous, user-centric endeavor.
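To make the abstract's proposal concrete, the sketch below shows what an adaptation-aware evaluation loop of the kind described might look like: a simulated persona whose preferences drift between sessions, and a per-session satisfaction curve rather than a one-off score. All names here (Persona, ToyAgent, run_session), the preference schema, and the toy catalog are illustrative assumptions for this listing, not the paper's actual method or the PersonalWAB data format.

```python
"""Minimal sketch of adaptation-aware evaluation with an evolving persona.

Assumptions: the agent, persona, and satisfaction measure are hypothetical
stand-ins; the paper's framework is conceptual and not tied to this code.
"""
import random
from dataclasses import dataclass


@dataclass
class Persona:
    """Simulated user whose preferences drift between sessions."""
    brand: str
    max_price: float
    drift_rate: float = 0.3

    def evolve(self) -> None:
        # Toy temporal drift: the budget occasionally loosens over time.
        if random.random() < self.drift_rate:
            self.max_price *= 1.2


class ToyAgent:
    """Stand-in retrieval agent that learns a price filter from feedback."""
    def __init__(self, catalog):
        self.catalog = catalog
        self.price_cap = float("inf")

    def search(self, query, k=5):
        hits = [item for item in self.catalog
                if query in item["title"] and item["price"] <= self.price_cap]
        return hits[:k]

    def update(self, accepted):
        # Adapt the price cap toward items the simulated user accepted.
        if accepted:
            self.price_cap = max(item["price"] for item in accepted) * 1.1


def run_session(agent, persona, queries):
    """One interaction session; returns mean per-query satisfaction."""
    scores = []
    for q in queries:
        results = agent.search(q)
        accepted = [r for r in results
                    if r["brand"] == persona.brand and r["price"] <= persona.max_price]
        agent.update(accepted)
        scores.append(len(accepted) / max(len(results), 1))
    return sum(scores) / len(scores)


if __name__ == "__main__":
    catalog = [{"title": "running shoes", "brand": b, "price": p}
               for b in ("acme", "zenith") for p in (30, 60, 90)]
    persona = Persona(brand="acme", max_price=65.0)
    agent = ToyAgent(catalog)
    queries = ["running shoes"]

    # The evaluation artifact is the trajectory across sessions,
    # not a single static snapshot.
    for session in range(1, 6):
        score = run_session(agent, persona, queries)
        print(f"session {session}: satisfaction {score:.2f}")
        persona.evolve()  # preferences shift before the next session
```

In this toy setup an adapting agent's satisfaction curve trends upward as it narrows its price filter, while the persona's drifting budget keeps the target moving, which is the kind of longitudinal signal a static benchmark would miss.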

Country of Origin
🇺🇸 🇯🇵 Japan, United States

Page Count
10 pages

Category
Computer Science:
Information Retrieval