Beyond Reactivity: Measuring Proactive Problem Solving in LLM Agents
By: Gil Pasternak, Dheeraj Rajagopal, Julia White, et al.
Potential Business Impact:
Helps computers solve problems before you ask.
LLM-based agents are increasingly moving toward proactivity: rather than awaiting instruction, they exercise agency to anticipate user needs and address them autonomously. However, evaluating proactivity is challenging; current benchmarks are constrained to localized context, limiting their ability to test reasoning across sources and longer time horizons. To address this gap, we present PROBE (Proactive Resolution Of BottlEnecks). PROBE decomposes proactivity into a pipeline of three core capabilities: (1) searching for unspecified issues, (2) identifying specific bottlenecks, and (3) executing appropriate resolutions. We apply PROBE to evaluate leading LLMs and popular agentic frameworks, showing that even state-of-the-art models struggle on this benchmark. Measuring frontier LLMs and agents under a consistent protocol, we find that the best end-to-end performance, 40%, is achieved by both GPT-5 and Claude Opus-4.1. We also compare each model's relative capabilities and analyze their shared failure modes. Our results highlight the current limitations of autonomous action in agentic systems and expose promising directions for future research.
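To make the three-stage decomposition concrete, here is a minimal sketch of how such a pipeline chains together, with failure at any stage sinking the end-to-end result. The function names, the `Bottleneck` type, and the keyword-matching stand-in for an LLM call are illustrative assumptions, not the benchmark's actual interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bottleneck:
    source: str       # where the issue surfaced (e.g., a document or thread)
    description: str  # what is blocking progress

def search_for_issues(workspace: list[str]) -> list[str]:
    """Stage 1: scan context sources, unprompted, for candidate problems.
    A real agent would use an LLM here; keyword matching is a placeholder."""
    return [doc for doc in workspace if "blocked" in doc.lower()]

def identify_bottleneck(candidates: list[str]) -> Optional[Bottleneck]:
    """Stage 2: narrow the candidates down to one specific bottleneck."""
    if not candidates:
        return None
    return Bottleneck(source="workspace", description=candidates[0])

def execute_resolution(bottleneck: Bottleneck) -> str:
    """Stage 3: take an appropriate action to resolve the bottleneck."""
    return f"Drafted follow-up addressing: {bottleneck.description}"

def run_pipeline(workspace: list[str]) -> Optional[str]:
    """End-to-end run: an error at any stage yields no resolution,
    which is why end-to-end scores lag single-stage performance."""
    candidates = search_for_issues(workspace)
    bottleneck = identify_bottleneck(candidates)
    return execute_resolution(bottleneck) if bottleneck else None

if __name__ == "__main__":
    docs = ["Q3 report finalized.", "Release is blocked on a missing sign-off."]
    print(run_pipeline(docs))
```

The point of the staged structure is diagnostic: scoring each stage separately shows whether an agent fails to notice issues at all, misidentifies them, or notices them but acts inappropriately.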
Similar Papers
ProAgent: Harnessing On-Demand Sensory Contexts for Proactive LLM Agent Systems
Artificial Intelligence
Helps smart glasses help you before you ask.
Training Proactive and Personalized LLM Agents
Artificial Intelligence
AI learns to ask questions and help better.
ProactiveEval: A Unified Evaluation Framework for Proactive Dialogue Agents
Computation and Language
Tests how well AI starts conversations.