Exploring Autonomous Agents: A Closer Look at Why They Fail When Completing Tasks

Published: August 18, 2025 | arXiv ID: 2508.13143v1

By: Ruofan Lu, Yichen Li, Yintong Huo

Potential Business Impact:

Pinpoints why AI agent systems fail at tasks, so builders can make them more reliable.

Autonomous agent systems powered by Large Language Models (LLMs) have demonstrated promising capabilities in automating complex tasks. However, current evaluations largely rely on success rates without systematically analyzing the interactions, communication mechanisms, and failure causes within these systems. To bridge this gap, we present a benchmark of 34 representative programmable tasks designed to rigorously assess autonomous agents. Using this benchmark, we evaluate three popular open-source agent frameworks combined with two LLM backbones, observing a task completion rate of approximately 50%. Through in-depth failure analysis, we develop a three-tier taxonomy of failure causes aligned with task phases, highlighting planning errors, task execution issues, and incorrect response generation. Based on these insights, we propose actionable improvements to enhance agent planning and self-diagnosis capabilities. Our failure taxonomy, together with mitigation advice, provides an empirical foundation for developing more robust and effective autonomous agent systems in the future.
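The abstract's three-tier taxonomy maps each failure to the task phase where it arises: planning, execution, or response generation. A minimal sketch of that idea, assuming a simplified per-phase pass/fail trace (the `TaskTrace` fields, class names, and classification logic here are hypothetical illustrations, not the paper's actual implementation):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class FailurePhase(Enum):
    """The three failure tiers, aligned with task phases (per the paper)."""
    PLANNING = "planning error"                 # e.g., flawed task decomposition
    EXECUTION = "task execution issue"          # e.g., a tool call goes wrong mid-task
    RESPONSE = "incorrect response generation"  # e.g., wrong or malformed final answer


@dataclass
class TaskTrace:
    """Hypothetical per-phase outcome record for one agent task run."""
    plan_ok: bool
    execution_ok: bool
    response_ok: bool


def classify_failure(trace: TaskTrace) -> Optional[FailurePhase]:
    """Return the earliest failing phase, or None if the task succeeded."""
    if not trace.plan_ok:
        return FailurePhase.PLANNING
    if not trace.execution_ok:
        return FailurePhase.EXECUTION
    if not trace.response_ok:
        return FailurePhase.RESPONSE
    return None


# A run where planning was sound but a tool call failed mid-task:
print(classify_failure(TaskTrace(plan_ok=True, execution_ok=False, response_ok=False)))
# → FailurePhase.EXECUTION
```

Attributing each failure to its earliest failing phase is what lets the authors aggregate failures into phase-level statistics and target mitigations (e.g., better planning or self-diagnosis) at the right stage.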

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
5 pages

Category
Computer Science:
Artificial Intelligence