Test-Time Graph Search for Goal-Conditioned Reinforcement Learning
By: Evgenii Opryshko, Junwei Quan, Claas Voelcker, et al.
Potential Business Impact:
Helps robots plan routes to faraway goals without extra training.
Offline goal-conditioned reinforcement learning (GCRL) trains policies that reach user-specified goals at test time, providing a simple, unsupervised, domain-agnostic way to extract diverse behaviors from unlabeled, reward-free datasets. Nonetheless, long-horizon decision making remains difficult for GCRL agents due to temporal credit assignment and error accumulation, and the offline setting amplifies these effects. To alleviate this issue, we introduce Test-Time Graph Search (TTGS), a lightweight planning approach to solve the GCRL task. TTGS accepts any state-space distance or cost signal, builds a weighted graph over dataset states, and performs fast search to assemble a sequence of subgoals that a frozen policy executes. When the base learner is value-based, the distance is derived directly from the learned goal-conditioned value function, so no handcrafted metric is needed. TTGS requires no changes to training, no additional supervision, no online interaction, and no privileged information, and it runs entirely at inference. On the OGBench benchmark, TTGS improves success rates of multiple base learners on challenging locomotion tasks, demonstrating the benefit of simple metric-guided test-time planning for offline GCRL.
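To make the pipeline concrete, here is a minimal Python sketch of the build-a-graph-then-search step the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: `states`, `dist_fn`, and the k-nearest-neighbor connectivity are placeholders. Per the abstract, when the base learner is value-based the distance can come from the learned goal-conditioned value function, e.g. `dist_fn = lambda i, j: -V(states[i], states[j])` for a value trained with per-step penalties.

```python
import heapq
import numpy as np

def build_graph(states, dist_fn, k=10):
    """Connect each dataset state to its k nearest neighbors under dist_fn.

    dist_fn(i, j) returns a nonnegative state-space distance or cost;
    the O(N^2) pairwise loop is for clarity only.
    """
    n = len(states)
    dists = np.array([[dist_fn(i, j) for j in range(n)] for i in range(n)])
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in np.argsort(dists[i]):
            if j == i:
                continue  # no self-edges
            graph[i].append((int(j), float(dists[i, j])))
            if len(graph[i]) == k:
                break
    return graph

def dijkstra_subgoals(graph, start, goal):
    """Shortest path from start to goal; returns node indices, or None."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal != start and goal not in prev:
        return None  # goal unreachable in the k-NN graph
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy usage: a Euclidean distance stands in for the learned metric.
states = np.random.rand(50, 2)
d = lambda i, j: float(np.linalg.norm(states[i] - states[j]))
graph = build_graph(states, d, k=5)
subgoal_path = dijkstra_subgoals(graph, start=0, goal=49)
```

At execution time, the frozen goal-conditioned policy would be conditioned on each state along the returned path in turn, advancing to the next subgoal once the current one is reached.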
Similar Papers
Graph-Assisted Stitching for Offline Hierarchical Reinforcement Learning
Machine Learning (CS)
Helps robots learn tasks much faster.
Offline Goal-conditioned Reinforcement Learning with Quasimetric Representations
Machine Learning (CS)
Teaches robots to reach goals better, even with mistakes.
Dual Goal Representations
Machine Learning (CS)
Teaches robots how to reach any goal.