AstroReason-Bench: Evaluating Unified Agentic Planning across Heterogeneous Space Planning Problems
By: Weiyi Wang, Xinchi Chen, Jingjing Gong, and more
Potential Business Impact:
Helps AI agents plan space missions better.
Recent advances in agentic Large Language Models (LLMs) have positioned them as generalist planners capable of reasoning and acting across diverse tasks. However, existing agent benchmarks largely focus on symbolic or weakly grounded environments, leaving their performance in physics-constrained real-world domains underexplored. We introduce AstroReason-Bench, a comprehensive benchmark for evaluating agentic planning in Space Planning Problems (SPP), a family of high-stakes tasks with heterogeneous objectives, strict physical constraints, and long-horizon decision-making. AstroReason-Bench integrates multiple scheduling regimes, including ground station communication and agile Earth observation, and provides a unified agent-oriented interaction protocol. Evaluating a range of state-of-the-art open- and closed-source agentic LLM systems, we find that current agents substantially underperform specialized solvers, highlighting key limitations of generalist planning under realistic constraints. AstroReason-Bench offers a challenging and diagnostic testbed for future agentic research.
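To make the idea of a "unified agent-oriented interaction protocol" concrete, here is a minimal sketch of what an agent-environment loop for a space scheduling task might look like. The abstract does not specify the benchmark's actual API, so every name below (SppEnvironment, Task, observe, step) is a hypothetical assumption, and the toy single-resource conflict model stands in for the real orbital and energy constraints.

```python
# Hypothetical sketch of an agent-environment loop for a scheduling
# benchmark in the spirit of AstroReason-Bench. All class and method
# names are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class Task:
    """One schedulable request, e.g. a ground-station contact window."""
    task_id: str
    start: float     # earliest feasible start time (assumed units: seconds)
    end: float       # latest feasible end time
    duration: float


@dataclass
class SppEnvironment:
    """Toy single-resource scheduler: tasks conflict if their chosen
    windows overlap. Real SPP instances add orbital dynamics, slew
    constraints, and energy budgets, none of which are modeled here."""
    tasks: dict[str, Task]
    schedule: list[tuple[str, float]] = field(default_factory=list)

    def observe(self) -> dict:
        """Return the state an agent would see (a real benchmark would
        serialize this into the LLM's prompt)."""
        placed = {t for t, _ in self.schedule}
        return {
            "unscheduled": sorted(set(self.tasks) - placed),
            "schedule": list(self.schedule),
        }

    def step(self, task_id: str, start: float) -> tuple[bool, str]:
        """Attempt to schedule a task; reject physically infeasible moves."""
        task = self.tasks.get(task_id)
        if task is None:
            return False, f"unknown task {task_id}"
        if not (task.start <= start and start + task.duration <= task.end):
            return False, "outside visibility window"
        for other_id, other_start in self.schedule:
            other = self.tasks[other_id]
            if (start < other_start + other.duration
                    and other_start < start + task.duration):
                return False, f"conflicts with {other_id}"
        self.schedule.append((task_id, start))
        return True, "scheduled"


# Usage: a (stub) agent proposes actions and receives feedback per step.
env = SppEnvironment(tasks={
    "t1": Task("t1", start=0.0, end=100.0, duration=30.0),
    "t2": Task("t2", start=20.0, end=80.0, duration=30.0),
})
print(env.observe())
print(env.step("t1", 0.0))   # (True, 'scheduled')
print(env.step("t2", 10.0))  # (False, 'conflicts with t1')
print(env.step("t2", 30.0))  # (True, 'scheduled')
```

The stepwise propose-and-check loop, rather than a one-shot plan dump, is what distinguishes agentic evaluation: the agent must react to constraint violations over a long horizon, which is where the abstract reports current LLM systems falling well short of specialized solvers.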
Similar Papers
PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities
Artificial Intelligence
Helps AI plan better for tasks and games.
AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts
Artificial Intelligence
Tests AI agents on real-world tasks.
CubeBench: Diagnosing Interactive, Long-Horizon Spatial Reasoning Under Partial Observations
Artificial Intelligence
Tests AI on solving puzzles it can only partly see.