Multistep Quasimetric Learning for Scalable Goal-conditioned Reinforcement Learning
By: Bill Chunyuan Zheng, Vivek Myers, Benjamin Eysenbach, and more
Potential Business Impact:
Teaches robots to reach distant goals by learning from unlabeled visual data.
Learning to reach goals in an environment is a longstanding problem in AI, yet reasoning over long horizons remains a challenge for modern methods. The key question is how to estimate the temporal distance between pairs of observations. While temporal difference methods leverage local updates to provide optimality guarantees, they often underperform Monte Carlo methods, which make global updates (e.g., with multistep returns) but lack such guarantees. We show how these approaches can be integrated into a practical GCRL method that fits a quasimetric distance using a multistep Monte-Carlo return. Our method outperforms existing GCRL methods on long-horizon simulated tasks with up to 4000 steps, even with visual observations. We also demonstrate that our method enables stitching in a real-world robotic manipulation domain (Bridge setup). Our approach is the first end-to-end GCRL method to enable multistep stitching in this real-world manipulation domain from an unlabeled offline dataset of visual observations.
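The abstract centers on one technical idea: fit a quasimetric (asymmetric, triangle-inequality-respecting) distance between pairs of observations to a multistep Monte-Carlo estimate of their temporal distance. Below is a minimal sketch of that idea in PyTorch; the max-relu quasimetric parameterization, network sizes, and plain squared-error regression are illustrative assumptions for exposition, not the authors' exact architecture or loss.

# Minimal sketch: fitting a quasimetric distance to multistep Monte-Carlo
# return targets (illustrative assumptions, not the paper's exact method).
import torch
import torch.nn as nn

class QuasimetricDistance(nn.Module):
    # d(x, y) = sum_i max(0, f(y)_i - f(x)_i).
    # This form is asymmetric, gives d(x, x) = 0, and satisfies the triangle
    # inequality, so it is a valid quasimetric by construction.
    def __init__(self, obs_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs, goal):
        zx, zy = self.encoder(obs), self.encoder(goal)
        return torch.relu(zy - zx).sum(dim=-1)

def multistep_distance_loss(model, obs, goal, n_steps):
    # Regress the predicted quasimetric distance onto a multistep
    # Monte-Carlo target: `n_steps` is the number of environment steps
    # between `obs` and `goal` along the same trajectory.
    pred = model(obs, goal)
    return ((pred - n_steps.float()) ** 2).mean()

# Usage sketch: one gradient step on a batch of trajectory pairs
# (random tensors stand in for real observation data).
obs_dim = 32
model = QuasimetricDistance(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
obs = torch.randn(128, obs_dim)
goal = torch.randn(128, obs_dim)
n_steps = torch.randint(1, 100, (128,))
loss = multistep_distance_loss(model, obs, goal, n_steps)
opt.zero_grad(); loss.backward(); opt.step()

Because the quasimetric architecture enforces the triangle inequality by construction, distances fitted from Monte-Carlo pairs can still compose across trajectories, which is the property that makes stitching possible.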
Similar Papers
Offline Goal-conditioned Reinforcement Learning with Quasimetric Representations
Machine Learning (CS)
Teaches robots to reach goals better, even with mistakes.
Goal Reaching with Eikonal-Constrained Hierarchical Quasimetric Reinforcement Learning
Machine Learning (CS)
Teaches robots to reach goals without mistakes.