Subgoal Graph-Augmented Planning for LLM-Guided Open-World Reinforcement Learning
By: Shanwei Fan
Potential Business Impact:
Helps AI agents follow plans by checking each step.
Large language models (LLMs) offer strong high-level planning capabilities for reinforcement learning (RL) by decomposing tasks into subgoals. However, their practical utility is limited by poor planning-execution alignment, which reflects a critical gap between abstract plans and actionable, environment-compatible behaviors. This misalignment arises from two interrelated limitations: (1) LLMs often produce subgoals that are semantically plausible but infeasible or irrelevant in the target environment due to insufficient grounding in environment-specific knowledge, and (2) single-LLM planning conflates generation with self-verification, resulting in overconfident yet unreliable subgoals that frequently fail during execution. To address these challenges, we propose Subgoal Graph-Augmented Actor-Critic-Refiner (SGA-ACR), a framework that integrates an environment-specific subgoal graph and structured entity knowledge with a multi-LLM planning pipeline that explicitly separates generation, critique, and refinement to produce executable and verifiable subgoals. A subgoal tracker further monitors execution progress, provides auxiliary rewards, and adaptively updates the subgoal graph to maintain alignment between plans and actions. Experimental results on 22 diverse tasks in the open-world game "Crafter" demonstrate the effectiveness of our proposed method.
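To make the described pipeline concrete, below is a minimal Python sketch of the separation the abstract outlines: a subgoal graph used as a feasibility check, a generate-critique-refine loop across three LLM roles, and a tracker that emits auxiliary rewards. All names (SubgoalGraph, plan_with_actor_critic_refiner, SubgoalTracker), the prerequisite-edge representation, and the bonus value are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the SGA-ACR-style loop described in the abstract.
# Class/function names and data layout are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class SubgoalGraph:
    """Environment-specific graph: maps each subgoal to its prerequisites."""
    prerequisites: dict[str, set[str]] = field(default_factory=dict)

    def is_feasible(self, plan: list[str]) -> bool:
        # A plan is feasible if every subgoal's prerequisites appear earlier.
        done: set[str] = set()
        for sg in plan:
            if not self.prerequisites.get(sg, set()) <= done:
                return False
            done.add(sg)
        return True

def plan_with_actor_critic_refiner(task, graph, actor_llm, critic_llm,
                                   refiner_llm, max_rounds=3):
    """Separate generation, critique, and refinement across three LLM roles."""
    plan = actor_llm(task)                        # generate candidate subgoals
    for _ in range(max_rounds):
        critique = critic_llm(task, plan, graph)  # ground-check against the graph
        if critique is None and graph.is_feasible(plan):
            break                                 # plan passed both checks
        plan = refiner_llm(task, plan, critique)  # revise infeasible subgoals
    return plan

class SubgoalTracker:
    """Monitors execution progress and emits an auxiliary shaped reward."""
    def __init__(self, plan: list[str], bonus: float = 0.1):
        self.remaining = list(plan)
        self.bonus = bonus

    def step(self, achieved: set[str]) -> float:
        # Grant a bonus when the next pending subgoal is observed as achieved.
        if self.remaining and self.remaining[0] in achieved:
            self.remaining.pop(0)
            return self.bonus
        return 0.0
```

Keeping the critic separate from the actor mirrors the abstract's motivation: a generator that also self-verifies tends to be overconfident, whereas an external feasibility check against the subgoal graph can reject ungrounded steps before execution.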
Similar Papers
PilotRL: Training Language Model Agents via Global Planning-Guided Progressive Reinforcement Learning
Computation and Language
Helps AI agents plan and act better.
Context Matters! Relaxing Goals with LLMs for Feasible 3D Scene Planning
Robotics
Robots learn to do tasks even when things change.