Test-Time Graph Search for Goal-Conditioned Reinforcement Learning

Published: October 8, 2025 | arXiv ID: 2510.07257v1

By: Evgenii Opryshko, Junwei Quan, Claas Voelcker, and more

Potential Business Impact:

Lets robots reach user-specified goals at test time by planning over previously collected data, with no extra training or online practice.

Business Areas:
Skill Assessment, Education

Offline goal-conditioned reinforcement learning (GCRL) trains policies that reach user-specified goals at test time, providing a simple, unsupervised, domain-agnostic way to extract diverse behaviors from unlabeled, reward-free datasets. Nonetheless, long-horizon decision making remains difficult for GCRL agents due to temporal credit assignment and error accumulation, and the offline setting amplifies these effects. To alleviate this issue, we introduce Test-Time Graph Search (TTGS), a lightweight planning approach to solve the GCRL task. TTGS accepts any state-space distance or cost signal, builds a weighted graph over dataset states, and performs fast search to assemble a sequence of subgoals that a frozen policy executes. When the base learner is value-based, the distance is derived directly from the learned goal-conditioned value function, so no handcrafted metric is needed. TTGS requires no changes to training, no additional supervision, no online interaction, and no privileged information, and it runs entirely at inference. On the OGBench benchmark, TTGS improves success rates of multiple base learners on challenging locomotion tasks, demonstrating the benefit of simple metric-guided test-time planning for offline GCRL.
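A minimal sketch of what such metric-guided test-time planning could look like, assuming a k-nearest-neighbor graph over dataset states and Dijkstra search for subgoals. All names here (build_graph, plan_subgoals, execute, distance_fn, k, switch_radius, the env/policy interfaces) are illustrative assumptions, not the paper's actual API, and the exact value-to-distance transform the authors use may differ.

```python
import heapq
import numpy as np

def build_graph(states, distance_fn, k=10):
    """Connect each dataset state to its k nearest neighbors under the
    learned distance; edge weights are those distances. (A real
    implementation would batch the distance computation.)"""
    n = len(states)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        d = np.array([distance_fn(states[i], states[j]) for j in range(n)])
        d[i] = np.inf  # no self-edges
        for j in np.argsort(d)[:k]:
            graph[i].append((int(j), float(d[j])))
    return graph

def plan_subgoals(graph, states, start_idx, goal_idx):
    """Dijkstra search from start to goal over the state graph;
    returns the sequence of subgoal states along the cheapest path."""
    dist = {start_idx: 0.0}
    prev = {}
    pq = [(0.0, start_idx)]
    visited = set()
    while pq:
        cost, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal_idx:
            break
        for v, w in graph[u]:
            new_cost = cost + w
            if new_cost < dist.get(v, np.inf):
                dist[v] = new_cost
                prev[v] = u
                heapq.heappush(pq, (new_cost, v))
    if goal_idx != start_idx and goal_idx not in prev:
        return None  # goal not reachable in the graph
    path, node = [goal_idx], goal_idx
    while node != start_idx:
        node = prev[node]
        path.append(node)
    return [states[i] for i in reversed(path)]

# With a value-based learner, a distance can be derived from the learned
# goal-conditioned value function; one illustrative transform:
# distance_fn = lambda s, g: -learned_value(s, g)

def execute(env, policy, subgoals, distance_fn,
            switch_radius=1.0, max_steps=1000):
    """Feed the frozen policy one subgoal at a time, advancing to the
    next subgoal once the agent is close enough under the distance."""
    obs, idx = env.reset(), 0
    for _ in range(max_steps):
        if (idx < len(subgoals) - 1
                and distance_fn(obs, subgoals[idx]) < switch_radius):
            idx += 1
        obs, _, done, _ = env.step(policy(obs, subgoals[idx]))
        if done:
            break
```

Consistent with the abstract, everything above runs purely at inference: training, the dataset, and the policy are untouched, and when the base learner is value-based the distance comes for free from the learned goal-conditioned value function rather than a handcrafted metric.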

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)