Hierarchical Reinforcement Learning with Uncertainty-Guided Diffusional Subgoals
By: Vivienne Huiling Wang, Tinghuai Wang, Joni Pajarinen
Potential Business Impact:
Teaches robots to learn complex tasks faster.
Hierarchical reinforcement learning (HRL) learns to make decisions on multiple levels of temporal abstraction. A key challenge in HRL is that the low-level policy changes over time, making it difficult for the high-level policy to generate effective subgoals. To address this issue, the high-level policy must capture a complex subgoal distribution while also accounting for uncertainty in its estimates. We propose an approach that trains a conditional diffusion model, regularized by a Gaussian Process (GP) prior, to generate a diverse set of complex subgoals while leveraging principled GP uncertainty quantification. Building on this framework, we develop a strategy that selects subgoals from both the diffusion policy and the GP's predictive mean. Our approach outperforms prior HRL methods in both sample efficiency and performance on challenging continuous control benchmarks.
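To make the selection strategy concrete, here is a minimal sketch of uncertainty-guided subgoal selection. It is not the authors' implementation: the diffusion policy is stubbed out as a hypothetical sample_diffusion_subgoal function, the GP prior is stood in for by scikit-learn's GaussianProcessRegressor, and the names select_subgoal and std_threshold are illustrative assumptions. The idea shown is the one the abstract describes: when the GP is confident, use its predictive mean as the subgoal; otherwise fall back on a sample from the more expressive diffusion policy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def sample_diffusion_subgoal(state, rng):
    """Hypothetical stand-in for the conditional diffusion policy:
    returns a candidate subgoal conditioned on the current state."""
    return state + rng.normal(scale=0.5, size=state.shape)

def select_subgoal(state, gp, rng, std_threshold=0.2):
    """Choose between the diffusion sample and the GP predictive mean.
    Low predictive std -> trust the GP mean; high std -> use the
    diffusion sample to keep exploring the subgoal distribution."""
    diffusion_goal = sample_diffusion_subgoal(state, rng)
    mean, std = gp.predict(state.reshape(1, -1), return_std=True)
    if std.mean() < std_threshold:
        return mean.ravel()   # confident GP estimate
    return diffusion_goal     # diverse diffusion sample

# Toy usage: fit the GP on (state, subgoal) pairs from past high-level steps.
rng = np.random.default_rng(0)
states = rng.normal(size=(32, 4))                             # visited states
subgoals = states + rng.normal(scale=0.1, size=states.shape)  # achieved subgoals
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(states, subgoals)
print(select_subgoal(rng.normal(size=4), gp, rng))
```

The threshold-based switch is only one plausible way to combine the two sources; the paper's actual selection rule may weight or rank candidates differently.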
Similar Papers
Hierarchical Reinforcement Learning in Multi-Goal Spatial Navigation with Autonomous Mobile Robots
Artificial Intelligence
Robots learn to navigate complex places faster.
Goal-conditioned Hierarchical Reinforcement Learning for Sample-efficient and Safe Autonomous Driving at Intersections
Robotics
Teaches self-driving cars to avoid crashes.
Hierarchical Reinforcement Learning with Targeted Causal Interventions
Machine Learning (CS)
Teaches robots to learn tasks faster.