Survey on Evaluation of LLM-based Agents
By: Asaf Yehudai, Lilach Eden, Alan Li, and more
Potential Business Impact:
Surveys how to test how well AI agents can plan, act, and learn.
The emergence of LLM-based agents represents a paradigm shift in AI, enabling autonomous systems to plan, reason, use tools, and maintain memory while interacting with dynamic environments. This paper provides the first comprehensive survey of evaluation methodologies for these increasingly capable agents. We systematically analyze evaluation benchmarks and frameworks across four critical dimensions: (1) fundamental agent capabilities, including planning, tool use, self-reflection, and memory; (2) application-specific benchmarks for web, software engineering, scientific, and conversational agents; (3) benchmarks for generalist agents; and (4) frameworks for evaluating agents. Our analysis reveals emerging trends, including a shift toward more realistic, challenging evaluations with continuously updated benchmarks. We also identify critical gaps that future research must address, particularly in assessing cost-efficiency, safety, and robustness, and in developing fine-grained and scalable evaluation methods. This survey maps the rapidly evolving landscape of agent evaluation, reveals emerging trends in the field, identifies current limitations, and proposes directions for future research.
Similar Papers
Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey
Computation and Language
Helps AI agents perform better than chatbots.
From LLM Reasoning to Autonomous AI Agents: A Comprehensive Review
Artificial Intelligence
Organizes AI tests and tools for better understanding.
Benchmarking and Studying the LLM-based Agent System in End-to-End Software Development
Software Engineering
Helps AI build better computer programs.