PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities
By: Haoming Li, Zhaoliang Chen, Jonathan Zhang, and more
Potential Business Impact:
Helps pick the right tests for AI planning in tasks and games.
Planning is central to agents and agentic AI. The ability to plan, e.g., to create a travel itinerary within a budget, holds immense potential in both scientific and commercial contexts. Moreover, optimal plans tend to require fewer resources than ad-hoc methods. To date, a comprehensive understanding of existing planning benchmarks has been lacking; without it, comparing planning algorithms' performance across domains or selecting suitable algorithms for new scenarios remains challenging. In this paper, we examine a range of planning benchmarks to identify commonly used testbeds for algorithm development and to highlight potential gaps. These benchmarks are categorized into embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation. Our study recommends the most appropriate benchmarks for various algorithms and offers insights to guide future benchmark development.
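The abstract's five-way categorization can be read as a small catalog data structure. Below is a minimal, hypothetical Python sketch of such a catalog: the five category names come directly from the abstract, while the Benchmark class, the example entries, and their category assignments are illustrative assumptions, not the paper's actual taxonomy data.

```python
from enum import Enum
from dataclasses import dataclass

class PlanningDomain(Enum):
    """The five benchmark categories named in the abstract."""
    EMBODIED_ENVIRONMENTS = "embodied environments"
    WEB_NAVIGATION = "web navigation"
    SCHEDULING = "scheduling"
    GAMES_AND_PUZZLES = "games and puzzles"
    EVERYDAY_TASK_AUTOMATION = "everyday task automation"

@dataclass
class Benchmark:
    name: str
    domain: PlanningDomain

# Hypothetical entries for illustration; the category assignments
# here are assumptions, not taken from the paper.
catalog = [
    Benchmark("REALM-Bench", PlanningDomain.SCHEDULING),
    Benchmark("CostBench", PlanningDomain.EVERYDAY_TASK_AUTOMATION),
]

# Group benchmarks by category so algorithms can be compared
# within a single planning domain.
by_domain: dict[PlanningDomain, list[str]] = {}
for bench in catalog:
    by_domain.setdefault(bench.domain, []).append(bench.name)
print(by_domain)
```

A structure like this makes the survey's recommendation task concrete: selecting a benchmark for a new algorithm reduces to looking up the matching planning domain.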
Similar Papers
REALM-Bench: A Benchmark for Evaluating Multi-Agent Systems on Real-world, Dynamic Planning and Scheduling Tasks
Artificial Intelligence
Tests how well AI plans and fixes problems.
CostBench: Evaluating Multi-Turn Cost-Optimal Planning and Adaptation in Dynamic Environments for LLM Tool-Use Agents
Artificial Intelligence
Helps AI plan cheaper trips by learning from mistakes.
UrbanPlanBench: A Comprehensive Urban Planning Benchmark for Evaluating Large Language Models
Computation and Language
Helps city planners use AI for better towns.