Physics-informed Value Learner for Offline Goal-Conditioned Reinforcement Learning
By: Vittorio Giammarino, Ruiqi Ni, Ahmed H. Qureshi
Potential Business Impact:
Teaches robots to navigate complex places safely.
Offline Goal-Conditioned Reinforcement Learning (GCRL) holds great promise for domains such as autonomous navigation and locomotion, where collecting interactive data is costly and unsafe. However, it remains challenging in practice due to the need to learn from datasets with limited coverage of the state-action space and to generalize across long-horizon tasks. To address these challenges, we propose a Physics-informed (Pi) regularized loss for value learning, derived from the Eikonal Partial Differential Equation (PDE), which induces a geometric inductive bias in the learned value function. Unlike generic gradient penalties that are primarily used to stabilize training, our formulation is grounded in continuous-time optimal control and encourages value functions to align with cost-to-go structures. The proposed regularizer is broadly compatible with temporal-difference-based value learning and can be integrated into existing Offline GCRL algorithms. When combined with Hierarchical Implicit Q-Learning (HIQL), the resulting method, Physics-informed HIQL (Pi-HIQL), yields significant improvements in both performance and generalization, with pronounced gains in stitching regimes and large-scale navigation tasks.
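To make the idea concrete, the Eikonal PDE for a cost-to-go function constrains the norm of its spatial gradient (e.g., ||∇_s V(s, g)|| ≈ 1 under unit motion cost), so an Eikonal-style regularizer penalizes deviations of that gradient norm from a target speed. The sketch below is a minimal, hypothetical illustration of such a penalty on a batch of precomputed value gradients; the function name, arguments, and exact weighting are assumptions, not the paper's implementation.

```python
import numpy as np

def eikonal_penalty(grad_v, target_speed=1.0):
    """Eikonal-style regularizer (illustrative sketch, not the paper's exact loss).

    grad_v: array of shape (batch, state_dim) holding gradients of the
        goal-conditioned value function w.r.t. the state, d V(s, g) / d s
        (obtained via autodiff in a real implementation).
    target_speed: target gradient norm; 1.0 corresponds to unit motion cost.

    Returns the mean squared deviation of ||grad_v|| from target_speed,
    which would be added to a TD-based value loss with some weight.
    """
    norms = np.linalg.norm(grad_v, axis=-1)          # per-sample gradient norms
    return float(np.mean((norms - target_speed) ** 2))

# Example: one gradient already satisfying the Eikonal constraint, one not.
grads = np.array([[1.0, 0.0],   # norm 1 -> zero residual
                  [0.0, 2.0]])  # norm 2 -> residual (2 - 1)^2 = 1
penalty = eikonal_penalty(grads)  # mean of [0, 1] = 0.5
```

In practice the regularizer would be weighted and summed with the temporal-difference loss, and the gradients would come from differentiating the value network with respect to its state input.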
Similar Papers
Goal Reaching with Eikonal-Constrained Hierarchical Quasimetric Reinforcement Learning
Machine Learning (CS)
Teaches robots to reach goals without mistakes.
Physics-Informed Regression: Parameter Estimation in Parameter-Linear Nonlinear Dynamic Models
Machine Learning (CS)
Finds hidden rules in science data faster.
Physically-Grounded Goal Imagination: Physics-Informed Variational Autoencoder for Self-Supervised Reinforcement Learning
Robotics
Robots learn new skills by imagining realistic goals.