SokoBench: Evaluating Long-Horizon Planning and Reasoning in Large Language Models
By: Sebastiano Monti, Carlo Nicolini, Gianni Pellegrini, and more
Potential Business Impact:
Language models still struggle to plan many steps ahead, limiting their reliability for multi-step automation.
Although the capabilities of large language models are increasingly tested on complex reasoning tasks, their long-horizon planning abilities have not yet been extensively investigated. In this work, we provide a systematic assessment of the planning and long-horizon reasoning capabilities of state-of-the-art Large Reasoning Models (LRMs). We propose a novel benchmark based on Sokoban puzzles, intentionally simplified to isolate long-horizon planning from state persistence. Our findings reveal a consistent degradation in planning performance when more than 25 moves are required to reach the solution, suggesting a fundamental constraint on forward planning capacity. We show that equipping LRMs with Planning Domain Definition Language (PDDL) parsing, validation, and solving tools yields only modest improvements, suggesting inherent architectural limitations that may not be overcome by test-time scaling approaches alone.
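To make the notion of "moves required to reach the solution" concrete, here is a minimal sketch, not taken from the paper: a breadth-first search over a toy Sokoban instance that returns the exact optimal move count, i.e. the horizon length that SokoBench varies. The level string, the `parse` and `solve` helpers, and the standard Sokoban notation ('#' wall, '.' goal, '$' box, '@' player) are all illustrative assumptions; the benchmark's actual simplified variant may differ.

```python
# Illustrative sketch (not the paper's code): measure the optimal
# solution length of a toy Sokoban level with breadth-first search.
from collections import deque

# Assumed standard notation: '#' wall, '.' goal, '$' box, '@' player.
LEVEL = [
    "#######",
    "#     #",
    "# $@  #",
    "#.    #",
    "#######",
]

def parse(level):
    walls, goals, boxes, player = set(), set(), set(), None
    for r, row in enumerate(level):
        for c, ch in enumerate(row):
            if ch == '#': walls.add((r, c))
            elif ch == '.': goals.add((r, c))
            elif ch == '$': boxes.add((r, c))
            elif ch == '@': player = (r, c)
    return walls, goals, frozenset(boxes), player

def solve(level):
    """Return the minimal number of moves to put every box on a goal."""
    walls, goals, boxes, player = parse(level)
    start = (player, boxes)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (pos, boxes), depth = queue.popleft()
        if boxes <= goals:                  # all boxes on goal squares
            return depth                    # exact optimal move count
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if nxt in walls:
                continue
            new_boxes = boxes
            if nxt in boxes:                # stepping into a box pushes it
                dest = (nxt[0] + dr, nxt[1] + dc)
                if dest in walls or dest in boxes:
                    continue                # push is blocked
                new_boxes = frozenset(b if b != nxt else dest for b in boxes)
            state = (nxt, new_boxes)
            if state not in seen:
                seen.add(state)
                queue.append((state, depth + 1))
    return None                             # level is unsolvable

print(solve(LEVEL))  # prints 4 for this toy instance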
Similar Papers
DeepPlanning: Benchmarking Long-Horizon Agentic Planning with Verifiable Constraints
Artificial Intelligence
Helps AI plan trips and shop better.
CubeBench: Diagnosing Interactive, Long-Horizon Spatial Reasoning Under Partial Observations
Artificial Intelligence
Helps robots understand and solve physical puzzles.