Dual Goal Representations
By: Seohong Park, Deepinder Mann, Sergey Levine
Potential Business Impact:
Teaches robots how to reach any goal.
In this work, we introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states"; in other words, it encodes a state through its relations to every other state, measured by temporal distance. This representation provides several appealing theoretical properties. First, it depends only on the intrinsic dynamics of the environment and is invariant to the original state representation. Second, it contains provably sufficient information to recover an optimal goal-reaching policy, while being able to filter out exogenous noise. Based on this concept, we develop a practical goal representation learning method that can be combined with any existing GCRL algorithm. Through diverse experiments on the OGBench task suite, we empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
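The core idea above, encoding a goal by its temporal distances from all other states, can be illustrated in a small tabular setting. The sketch below is an illustrative toy example, not the paper's learned method: it uses breadth-first search on a known transition graph to compute exact shortest-path step counts, whereas the paper learns these distances from offline data. The function names and the toy chain environment are hypothetical.

```python
from collections import deque

def temporal_distances_to(adj, goal):
    """Minimum step count d(s, goal) for every state s, via BFS on reversed edges."""
    # Reverse the adjacency so a BFS started at the goal follows edges backward.
    rev = {s: [] for s in adj}
    for s, nbrs in adj.items():
        for t in nbrs:
            rev[t].append(s)
    dist = {s: float("inf") for s in adj}  # unreachable states keep distance inf
    dist[goal] = 0
    q = deque([goal])
    while q:
        u = q.popleft()
        for v in rev[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dual_representation(adj, goal, state_order):
    """Encode a goal as the vector of temporal distances from all states to it."""
    d = temporal_distances_to(adj, goal)
    return [d[s] for s in state_order]

# Toy 4-state chain: 0 -> 1 -> 2 -> 3, with a self-loop at 3.
adj = {0: [1], 1: [2], 2: [3], 3: [3]}
states = [0, 1, 2, 3]
print(dual_representation(adj, 3, states))  # -> [3, 2, 1, 0]
```

Note that this vector depends only on the environment's dynamics, not on how states are labeled or rendered, which mirrors the invariance property the abstract highlights.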
Similar Papers
Offline Goal-conditioned Reinforcement Learning with Quasimetric Representations
Machine Learning (CS)
Teaches robots to reach goals better, even with mistakes.
General and Efficient Visual Goal-Conditioned Reinforcement Learning using Object-Agnostic Masks
CV and Pattern Recognition
Teaches robots to grab any object without knowing its location.
Test-Time Graph Search for Goal-Conditioned Reinforcement Learning
Machine Learning (CS)
Helps robots learn to reach goals without practice.