CubeBench: Diagnosing Interactive, Long-Horizon Spatial Reasoning Under Partial Observations
By: Huan-ang Gao, Zikang Zhang, Tianwei Luo, and more
Potential Business Impact:
Helps robots understand and solve physical puzzles.
Large Language Model (LLM) agents, while proficient in the digital realm, face a significant gap in physical-world deployment due to the challenge of forming and maintaining a robust spatial mental model. We identify three core cognitive challenges hindering this transition: spatial reasoning, long-horizon state tracking via mental simulation, and active exploration under partial observation. To isolate and evaluate these faculties, we introduce CubeBench, a novel generative benchmark centered on the Rubik's Cube. CubeBench uses a three-tiered diagnostic framework that progressively assesses agent capabilities, from foundational state tracking with full symbolic information to active exploration with only partial visual data. Our experiments on leading LLMs reveal critical limitations, including a uniform 0.00% pass rate on all long-horizon tasks, exposing a fundamental failure in long-term planning. We also isolate these cognitive bottlenecks within our diagnostic framework by providing external solver tools. By analyzing the failure modes, we provide key insights to guide the development of more physically grounded intelligent agents.
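To make the "state tracking with full symbolic information" tier more concrete, below is a minimal, hypothetical sketch of the kind of bookkeeping an agent must perform: representing a Rubik's Cube as a 54-character facelet string and updating it after a face turn. The facelet ordering (Kociemba-style U, R, F, D, L, B) and the U-move permutation cycles are assumptions for illustration only; CubeBench's actual state encoding and task interface may differ.

```python
# Illustrative sketch (not CubeBench's actual API): symbolic state tracking
# for a Rubik's Cube using a 54-facelet string in U, R, F, D, L, B order.

SOLVED = "U" * 9 + "R" * 9 + "F" * 9 + "D" * 9 + "L" * 9 + "B" * 9

# A clockwise U turn, expressed as permutation cycles over facelet indices.
# Each tuple (a, b, c, d) means: sticker at a moves to b, b to c, c to d, d to a.
U_CYCLES = [
    (0, 2, 8, 6), (1, 5, 7, 3),                            # U face rotates clockwise
    (18, 36, 45, 9), (19, 37, 46, 10), (20, 38, 47, 11),    # top rows cycle F -> L -> B -> R
]


def apply_move(state: str, cycles=U_CYCLES) -> str:
    """Return the new facelet string after applying one move's permutation cycles."""
    new = list(state)
    for cyc in cycles:
        for i, src in enumerate(cyc):
            dst = cyc[(i + 1) % len(cyc)]
            new[dst] = state[src]
    return "".join(new)


def is_solved(state: str) -> bool:
    """A cube is solved when each block of 9 facelets shows a single color."""
    return all(len(set(state[i:i + 9])) == 1 for i in range(0, 54, 9))


if __name__ == "__main__":
    s = SOLVED
    for _ in range(4):          # a face turn has order 4: U applied four times
        s = apply_move(s)
    assert s == SOLVED and is_solved(s)
    print(apply_move(SOLVED))   # state after a single U turn
```

The point of the sketch is the cognitive load it externalizes: over a long action sequence, an agent without such explicit bookkeeping must mentally simulate every permutation, which is exactly the long-horizon state-tracking capability the benchmark probes.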
Similar Papers
Cube Bench: A Benchmark for Spatial Visual Reasoning in MLLMs
Computation and Language
Tests AI's ability to solve complex puzzles.
SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition
Artificial Intelligence
Tests how well computers understand space and plan.
From Indoor to Open World: Revealing the Spatial Reasoning Gap in MLLMs
CV and Pattern Recognition
Helps AI understand where things are in the real world.