The PROPER Approach to Proactivity: Benchmarking and Advancing Knowledge Gap Navigation
By: Kirandeep Kaur, Vinayak Gupta, Aditya Gupta, and more
Most language-based assistants follow a reactive ask-and-respond paradigm, requiring users to explicitly state their needs. As a result, relevant but unexpressed needs often go unmet. Existing proactive agents attempt to close this gap either by eliciting further clarification, which keeps the burden on the user, or by extrapolating future needs from context, which often leads to unnecessary or mistimed interventions. We introduce ProPer (Proactivity-driven Personalized agents), a novel two-agent architecture consisting of a Dimension Generating Agent (DGA) and a Response Generating Agent (RGA). The DGA, a fine-tuned LLM agent, leverages explicit user data to generate multiple implicit dimensions, or knowledge gaps: latent aspects relevant to the user's task that the user has not considered. These dimensions are then filtered by a reranker according to quality, diversity, and task relevance. The RGA balances explicit and implicit dimensions to tailor personalized responses with timely, proactive interventions. We evaluate ProPer across multiple domains using a structured, gap-aware rubric that measures coverage, initiative appropriateness, and intent alignment. Our results show that ProPer improves quality scores and win rates across all domains, achieving gains of up to 84% in single-turn evaluation and consistent dominance in multi-turn interactions.
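To make the described pipeline concrete, the sketch below traces the DGA, reranker, and RGA stages in Python. It is a minimal illustration under stated assumptions: the `call_llm` stand-in, the `Dimension` fields, the prompts, and the scoring weights are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# A minimal sketch of the two-agent pipeline described above (DGA -> reranker -> RGA).
# `call_llm`, `Dimension`, the prompts, and the scoring weights are illustrative
# assumptions, not the paper's actual implementation.


@dataclass
class Dimension:
    text: str         # a latent aspect of the task the user did not mention
    quality: float    # how well-formed and grounded the candidate is
    relevance: float  # relevance to the user's stated task
    diversity: float  # novelty relative to other selected dimensions


def dimension_generating_agent(call_llm: Callable[[str], str],
                               user_profile: str, task: str) -> List[str]:
    """DGA: prompt a (fine-tuned) LLM to propose implicit dimensions / knowledge gaps."""
    prompt = (
        f"User profile:\n{user_profile}\n\nTask:\n{task}\n\n"
        "List aspects relevant to this task that the user has likely not considered, "
        "one per line."
    )
    return [line.strip("- ").strip() for line in call_llm(prompt).splitlines() if line.strip()]


def rerank(candidates: List[Dimension], top_k: int = 3) -> List[Dimension]:
    """Reranker: keep the top-k candidates by a weighted mix of quality, relevance, diversity."""
    def score(d: Dimension) -> float:
        return 0.4 * d.quality + 0.3 * d.relevance + 0.3 * d.diversity
    return sorted(candidates, key=score, reverse=True)[:top_k]


def response_generating_agent(call_llm: Callable[[str], str], task: str,
                              explicit_needs: List[str],
                              implicit_dims: List[Dimension]) -> str:
    """RGA: balance explicit needs and selected implicit dimensions in one personalized response."""
    prompt = (
        f"Task:\n{task}\n\nExplicit user needs:\n" + "\n".join(explicit_needs) +
        "\n\nAlso proactively address, only where genuinely helpful:\n" +
        "\n".join(d.text for d in implicit_dims) +
        "\n\nWrite a personalized, timely response."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    # Toy stand-in for an LLM backend, just to show how the pieces connect.
    def fake_llm(prompt: str) -> str:
        if "not considered" in prompt:
            return "- visa requirements\n- travel insurance\n- local SIM card"
        return "Here is a one-week plan that also covers visas and insurance ..."

    raw = dimension_generating_agent(fake_llm, "frequent traveler, vegetarian",
                                     "Plan a week in Japan")
    cands = [Dimension(text=t, quality=0.8, relevance=0.7, diversity=0.6) for t in raw]
    print(response_generating_agent(fake_llm, "Plan a week in Japan",
                                    ["budget hotels", "temple visits"], rerank(cands)))
```

In this sketch the reranker is a simple weighted score; the abstract only specifies that filtering considers quality, diversity, and task relevance, so any learned or heuristic reranker could stand in for it.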
Similar Papers
- Beyond Reactivity: Measuring Proactive Problem Solving in LLM Agents (Artificial Intelligence). Helps computers solve problems before you ask.
- ProAgent: Harnessing On-Demand Sensory Contexts for Proactive LLM Agent Systems (Artificial Intelligence). Helps smart glasses help you before you ask.
- Training Proactive and Personalized LLM Agents (Artificial Intelligence). AI learns to ask questions and help better.