Exploring Autonomous Agents: A Closer Look at Why They Fail When Completing Tasks
By: Ruofan Lu, Yichen Li, Yintong Huo
Potential Business Impact:
Pinpoints why AI agent systems fail at tasks, so developers can build more reliable automation.
Autonomous agent systems powered by Large Language Models (LLMs) have demonstrated promising capabilities in automating complex tasks. However, current evaluations largely rely on success rates without systematically analyzing the interactions, communication mechanisms, and failure causes within these systems. To bridge this gap, we present a benchmark of 34 representative programmable tasks designed to rigorously assess autonomous agents. Using this benchmark, we evaluate three popular open-source agent frameworks combined with two LLM backbones, observing a task completion rate of approximately 50%. Through in-depth failure analysis, we develop a three-tier taxonomy of failure causes aligned with task phases, highlighting planning errors, task execution issues, and incorrect response generation. Based on these insights, we propose actionable improvements to enhance agent planning and self-diagnosis capabilities. Our failure taxonomy, together with mitigation advice, provides an empirical foundation for developing more robust and effective autonomous agent systems in the future.
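To make the three-tier, phase-aligned failure taxonomy more concrete, here is a minimal sketch of how such a taxonomy could be encoded and applied to agent traces. This is an illustrative assumption, not the paper's implementation: the phase names follow the abstract (planning, task execution, response generation), but the mid-level category labels, keyword rules, and the `classify` helper are hypothetical placeholders.

```python
# Illustrative sketch (assumption, not the paper's code): encoding a
# three-tier failure taxonomy aligned with agent task phases.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class TaskPhase(Enum):
    """Top tier: the task phase in which the failure occurs (from the abstract)."""
    PLANNING = "planning"
    EXECUTION = "task_execution"
    RESPONSE_GENERATION = "response_generation"


@dataclass
class FailureCause:
    """A leaf of the taxonomy: phase, a mid-level category, and the concrete cause."""
    phase: TaskPhase
    category: str   # e.g. "planning error"; labels here are hypothetical
    detail: str     # free-text evidence taken from the failure trace


def classify(trace: str) -> Optional[FailureCause]:
    """Toy classifier: map a few keyword patterns in an agent trace to a cause.

    A real analysis would inspect the full interaction and communication log;
    the keywords below are placeholders, not the paper's methodology.
    """
    lowered = trace.lower()
    if "no plan" in lowered or "wrong subtask" in lowered:
        return FailureCause(TaskPhase.PLANNING, "planning error", trace)
    if "tool call failed" in lowered or "execution error" in lowered:
        return FailureCause(TaskPhase.EXECUTION, "task execution issue", trace)
    if "final answer" in lowered:
        return FailureCause(TaskPhase.RESPONSE_GENERATION, "incorrect response", trace)
    return None  # task succeeded or the cause could not be determined


if __name__ == "__main__":
    example = "tool call failed: file not found while reading input.csv"
    print(classify(example))
```

In practice, labeling each failed run with a structure like this is what allows failure counts to be aggregated per phase, which is how a taxonomy of this shape can surface where mitigation effort (better planning, better self-diagnosis) pays off most.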
Similar Papers
How Do LLMs Fail In Agentic Scenarios? A Qualitative Analysis of Success and Failure Scenarios of Various LLMs in Agentic Simulations
Artificial Intelligence
Helps AI agents use tools more reliably.
From Language to Action: A Review of Large Language Models as Autonomous Agents and Tool Users
Computation and Language
AI learns to think, plan, and improve itself.
Fundamentals of Building Autonomous LLM Agents
Artificial Intelligence
Lets computers do complex jobs like people.