Score: 2

SokoBench: Evaluating Long-Horizon Planning and Reasoning in Large Language Models

Published: January 28, 2026 | arXiv ID: 2601.20856v1

By: Sebastiano Monti, Carlo Nicolini, Gianni Pellegrini, and more

Potential Business Impact:

AI language models struggle to plan more than roughly 25 steps ahead, which limits their reliability on long, multi-step tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although the capabilities of large language models have been increasingly tested on complex reasoning tasks, their long-horizon planning abilities have not yet been extensively investigated. In this work, we provide a systematic assessment of the planning and long-horizon reasoning capabilities of state-of-the-art Large Reasoning Models (LRMs). We propose a novel benchmark based on Sokoban puzzles, intentionally simplified to isolate long-horizon planning from state persistence. Our findings reveal a consistent degradation in planning performance when more than 25 moves are required to reach the solution, suggesting a fundamental constraint on forward planning capacity. We show that equipping LRMs with Planning Domain Definition Language (PDDL) parsing, validation, and solving tools yields only modest improvements, suggesting inherent architectural limitations that may not be overcome by test-time scaling approaches alone.
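To make the benchmark setup concrete, the sketch below shows one way a model-proposed move sequence could be checked against a simplified Sokoban grid. This is an illustrative assumption, not the paper's actual evaluation harness: the grid symbols (`#` wall, `B` box, `G` goal, `@` player) and the `plan_solves` helper are hypothetical choices for this example.

```python
# Minimal sketch of validating a Sokoban plan on a simplified grid.
# Assumed (not from the paper): symbols '#' wall, 'B' box, 'G' goal, '@' player,
# and moves given as a string over U/D/L/R.

MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def parse(board_rows):
    """Return (walls, boxes, goals, player) as sets of (row, col) plus the player cell."""
    walls, boxes, goals, player = set(), set(), set(), None
    for r, row in enumerate(board_rows):
        for c, ch in enumerate(row):
            if ch == "#":
                walls.add((r, c))
            elif ch == "B":
                boxes.add((r, c))
            elif ch == "G":
                goals.add((r, c))
            elif ch == "@":
                player = (r, c)
    return walls, boxes, goals, player

def plan_solves(board_rows, plan):
    """Apply a move string and report whether every box ends on a goal square."""
    walls, boxes, goals, player = parse(board_rows)
    for move in plan:
        dr, dc = MOVES[move]
        nxt = (player[0] + dr, player[1] + dc)
        if nxt in walls:
            return False                      # illegal: walked into a wall
        if nxt in boxes:
            pushed = (nxt[0] + dr, nxt[1] + dc)
            if pushed in walls or pushed in boxes:
                return False                  # illegal: box cannot be pushed
            boxes.remove(nxt)
            boxes.add(pushed)
        player = nxt
    return boxes == goals                     # solved iff all boxes sit on goals

# Tiny example: a single push to the right puts the box on the goal.
level = ["#####",
         "#@BG#",
         "#####"]
print(plan_solves(level, "R"))   # True
```

In the paper's framing, the hard part is not executing such a checker but having the model produce a valid plan in the first place; the reported degradation beyond roughly 25 moves refers to the length of the required move sequence.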

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Artificial Intelligence