Benchmark for Planning and Control with Large Language Model Agents: Blocksworld with Model Context Protocol
By: Niklas Jobs, Luis Miguel Vieira da Silva, Jayanth Somashekaraiah, and more
Potential Business Impact:
Tests how AI agents plan and carry out changing factory tasks.
Industrial automation increasingly requires flexible control strategies that can adapt to changing tasks and environments. Agents based on Large Language Models (LLMs) offer potential for such adaptive planning and execution but lack standardized benchmarks for systematic comparison. We introduce a benchmark with an executable simulation environment representing the Blocksworld problem and providing five complexity categories. By integrating the Model Context Protocol (MCP) as a standardized tool interface, diverse agent architectures can be connected to and evaluated against the benchmark without implementation-specific modifications. A single-agent implementation demonstrates the benchmark's applicability, establishing quantitative metrics for comparing LLM-based planning and execution approaches.
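To illustrate the integration point the abstract describes, the following is a minimal sketch of how a Blocksworld simulation could be exposed as MCP tools using the official Python MCP SDK's FastMCP server. The tool names (get_state, pick_up, put_down), the state representation, and the rules are hypothetical illustrations, not the paper's actual interface.

```python
# Minimal sketch: exposing a toy Blocksworld simulation as MCP tools.
# Uses the official `mcp` Python SDK (FastMCP). The tool names, state
# representation, and rules below are assumptions for illustration,
# not the benchmark's actual interface.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("blocksworld-benchmark")

# World state: maps each block to what it rests on ("table" or another block).
world: dict[str, str] = {"A": "table", "B": "A", "C": "table"}
holding: str | None = None


def is_clear(block: str) -> bool:
    """A block is clear if nothing rests on top of it."""
    return all(support != block for support in world.values())


@mcp.tool()
def get_state() -> dict:
    """Return the current Blocksworld state for the agent to inspect."""
    return {"on": dict(world), "holding": holding}


@mcp.tool()
def pick_up(block: str) -> str:
    """Pick up a clear block; fails if the gripper is already occupied."""
    global holding
    if holding is not None:
        return f"error: already holding {holding}"
    if block not in world:
        return f"error: unknown block {block}"
    if not is_clear(block):
        return f"error: {block} is not clear"
    del world[block]
    holding = block
    return f"ok: holding {block}"


@mcp.tool()
def put_down(target: str) -> str:
    """Place the held block on the table or on a clear block."""
    global holding
    if holding is None:
        return "error: not holding any block"
    if target != "table" and (target == holding or not is_clear(target)):
        return f"error: cannot place {holding} on {target}"
    world[holding] = target
    result = f"ok: {holding} on {target}"
    holding = None
    return result


if __name__ == "__main__":
    # Serve over stdio so any MCP-capable agent client can connect.
    mcp.run()
```

Because MCP clients discover tools through the protocol's standard tools/list handshake, any compliant agent can be pointed at a server like this without code changes, which is the property the benchmark relies on for comparing agent architectures without implementation-specific modifications.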
Similar Papers
MCP-Universe: Benchmarking Large Language Models with Real-World Model Context Protocol Servers
Artificial Intelligence
Tests AI on hard, real-world tasks.
MCPToolBench++: A Large Scale AI Agent Model Context Protocol MCP Tool Use Benchmark
Artificial Intelligence
Tests how well AI uses real-world tools.
MCP-Bench: Benchmarking Tool-Using LLM Agents with Complex Real-World Tasks via MCP Servers
Computation and Language
Tests AI's ability to use many tools together.