On the Limits of Innate Planning in Large Language Models
By: Charles Schepanowski, Charles Ling
Potential Business Impact:
AI language models struggle to solve puzzles without outside help.
Large language models (LLMs) achieve impressive results on many benchmarks, yet their capacity for planning and stateful reasoning remains unclear. We study these abilities directly, without code execution or other tools, using the 8-puzzle: a classic task that requires state tracking and goal-directed planning while allowing precise, step-by-step evaluation. Four models are tested under common prompting conditions (Zero-Shot, Chain-of-Thought, Algorithm-of-Thought) and with tiered corrective feedback. Feedback improves success rates for some model-prompt combinations, but many successful runs are long, computationally expensive, and indirect. We then examine the models with an external move validator that provides only valid moves. Despite this level of assistance, none of the models solve any puzzles in this setting. Qualitative analysis reveals two dominant deficits across all models: (1) brittle internal state representations, leading to frequent invalid moves, and (2) weak heuristic planning, with models entering loops or selecting actions that do not reduce the distance to the goal state. These findings indicate that, in the absence of external tools such as code interpreters, current LLMs have substantial limitations in planning and that further progress may require mechanisms for maintaining explicit state and performing structured search.
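To make the evaluation setup concrete, below is a minimal sketch, not taken from the paper, of the kind of external move validator and goal-distance heuristic the abstract alludes to for the 8-puzzle. The tuple encoding of the board, the move names, and the helper functions (valid_moves, apply_move, manhattan_distance) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: an 8-puzzle state is a tuple of 9 ints,
# read row by row, where 0 marks the blank tile.
from typing import List, Tuple

State = Tuple[int, ...]                    # e.g. (1, 2, 3, 4, 0, 5, 6, 7, 8)
GOAL: State = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal configuration

def valid_moves(state: State) -> List[str]:
    """List the legal moves of the blank in this state (the 'validator' role)."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    return moves

def apply_move(state: State, move: str) -> State:
    """Apply a legal move by swapping the blank with the adjacent tile."""
    i = state.index(0)
    j = i + {"up": -3, "down": 3, "left": -1, "right": 1}[move]
    tiles = list(state)
    tiles[i], tiles[j] = tiles[j], tiles[i]
    return tuple(tiles)

def manhattan_distance(state: State, goal: State = GOAL) -> int:
    """Sum of tile distances from their goal positions; 0 means solved."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gi = goal.index(tile)
        total += abs(idx // 3 - gi // 3) + abs(idx % 3 - gi % 3)
    return total

if __name__ == "__main__":
    start: State = (1, 2, 3, 4, 0, 5, 6, 7, 8)
    print(valid_moves(start))        # ['up', 'down', 'left', 'right']
    print(manhattan_distance(start)) # 6 tile-steps from GOAL
```

In the failure modes described above, a brittle internal state representation corresponds to proposing moves outside valid_moves(state), and weak heuristic planning corresponds to choosing moves that do not lower a goal-distance measure such as manhattan_distance.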
Similar Papers
Idea2Plan: Exploring AI-Powered Research Planning
Computation and Language
Helps computers plan science experiments from ideas.
How Far Are LLMs from Symbolic Planners? An NLP-Based Perspective
Artificial Intelligence
Fixes AI plans that make mistakes.
Exploring State Tracking Capabilities of Large Language Models
Computation and Language
Helps computers remember many things at once.