Dual Goal Representations

Published: October 8, 2025 | arXiv ID: 2510.06714v1

By: Seohong Park, Deepinder Mann, Sergey Levine

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Enables robots to learn to reach arbitrary goal states from offline data.

Business Areas:
Gamification, Gaming

In this work, we introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states"; in other words, it encodes a state through its relations to every other state, measured by temporal distance. This representation provides several appealing theoretical properties. First, it depends only on the intrinsic dynamics of the environment and is invariant to the original state representation. Second, it contains provably sufficient information to recover an optimal goal-reaching policy, while being able to filter out exogenous noise. Based on this concept, we develop a practical goal representation learning method that can be combined with any existing GCRL algorithm. Through diverse experiments on the OGBench task suite, we empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
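The core idea above can be illustrated concretely: a goal is encoded by the vector of temporal distances from every other state to it. The sketch below is a minimal, illustrative toy on a 5-state chain MDP, where exact temporal distances are computed by BFS; the names `temporal_distance` and `dual_representation` are assumptions, and the paper itself learns these distances from offline data rather than computing them exactly.

```python
from collections import deque

import numpy as np

# Toy deterministic MDP: 5 states on a chain; from each state the agent
# can step left or right. (Illustrative assumption, not the paper's setup.)
n_states = 5
neighbors = {s: [max(s - 1, 0), min(s + 1, n_states - 1)] for s in range(n_states)}

def temporal_distance(start, goal):
    """Minimum number of environment steps to reach `goal` from `start` (BFS)."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        for nxt in neighbors[s]:
            if nxt == goal:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def dual_representation(goal):
    """Dual goal representation: the vector of temporal distances
    from every state s to `goal`."""
    return np.array([temporal_distance(s, goal) for s in range(n_states)])

print(dual_representation(4))  # → [4 3 2 1 0]
```

Because the representation is built purely from temporal distances, it depends only on the environment's dynamics: relabeling or re-encoding the raw states leaves each goal's distance vector unchanged.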

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)