Hierarchical Reinforcement Learning with Targeted Causal Interventions
By: Sadegh Khorasani, Saber Salehkaleybar, Negar Kiyavash, and more
Potential Business Impact:
Teaches robots to learn tasks faster.
Hierarchical reinforcement learning (HRL) improves the efficiency of long-horizon reinforcement-learning tasks with sparse rewards by decomposing the task into a hierarchy of subgoals. The main challenge of HRL is efficiently discovering the hierarchical structure among subgoals and utilizing this structure to achieve the final goal. We address this challenge by modeling the subgoal structure as a causal graph and proposing a causal discovery algorithm to learn it. Additionally, rather than intervening on subgoals at random during exploration, we harness the discovered causal model to prioritize subgoal interventions based on their importance in attaining the final goal. These targeted interventions yield a significantly more efficient policy in terms of training cost. Unlike previous work on causal HRL, which lacked theoretical analysis, we provide a formal analysis of the problem. Specifically, for tree structures and for a variant of Erdős-Rényi random graphs, our approach results in remarkable improvements. Our experimental results on HRL tasks also show that the proposed framework outperforms existing work in terms of training cost.
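The abstract describes the core idea at a high level: learn a causal graph over subgoals, then bias exploration toward interventions on the subgoals that matter most for reaching the final goal. The sketch below is only a minimal illustration of that prioritization step, not the paper's algorithm; the example graph, the importance measure (counting reachable nodes from which the goal is still reachable), and the weighted sampling are all assumptions chosen for clarity.

```python
# Illustrative sketch only; the paper's actual method is not reproduced here.
# We assume an already-discovered causal graph over subgoals (adjacency dict),
# score each subgoal with a hypothetical importance measure, and sample the
# next intervention target by weight instead of uniformly at random.
import random
from collections import defaultdict

# Hypothetical subgoal causal graph: edge u -> v means achieving u enables v.
causal_graph = {
    "collect_key": ["open_door"],
    "open_door": ["reach_goal"],
    "collect_gem": ["reach_goal"],
    "reach_goal": [],
}
final_goal = "reach_goal"

def reaches_goal(node, graph, goal, seen=None):
    """Return True if `goal` is reachable from `node` in the causal graph."""
    if seen is None:
        seen = set()
    if node == goal:
        return True
    seen.add(node)
    return any(
        reaches_goal(child, graph, goal, seen)
        for child in graph.get(node, [])
        if child not in seen
    )

def importance(node, graph, goal):
    """Hypothetical importance: number of nodes reachable from `node`
    (including itself) from which the final goal is still reachable."""
    score, stack, seen = 0, [node], set()
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        if reaches_goal(cur, graph, goal):
            score += 1
        stack.extend(graph.get(cur, []))
    return score

def sample_intervention(graph, goal):
    """Pick a subgoal to intervene on, weighted by importance rather than uniformly."""
    candidates = [n for n in graph if n != goal]
    weights = [importance(n, graph, goal) for n in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    counts = defaultdict(int)
    for _ in range(1000):
        counts[sample_intervention(causal_graph, final_goal)] += 1
    # Subgoals further upstream on paths to the goal are selected more often.
    print(dict(counts))
```

Under these assumptions, "collect_key" is sampled more often than "collect_gem" because more goal-relevant subgoals are downstream of it; the paper's theoretical results concern how such targeted interventions reduce training cost relative to uniform random interventions.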
Similar Papers
Hierarchical Reinforcement Learning in Multi-Goal Spatial Navigation with Autonomous Mobile Robots
Artificial Intelligence
Robots learn to navigate complex places faster.
D3HRL: A Distributed Hierarchical Reinforcement Learning Approach Based on Causal Discovery and Spurious Correlation Detection
Machine Learning (CS)
Teaches robots to make better choices by understanding cause and effect.
Reinforcement Learning with Anticipation: A Hierarchical Approach for Long-Horizon Tasks
Machine Learning (CS)
Helps robots learn long, hard tasks by breaking them down.