Learning to Plan, Planning to Learn: Adaptive Hierarchical RL-MPC for Sample-Efficient Decision Making
By: Toshiaki Hori, Jonathan DeCastro, Deepak Gopinath, and more
We propose a new approach for solving planning problems with a hierarchical structure, fusing reinforcement learning with MPC planning. Our formulation tightly and elegantly couples the two planning paradigms: it uses reinforcement learning actions to inform the MPPI sampler, and adaptively aggregates MPPI samples to inform the value estimation. The resulting adaptive process allocates additional MPPI exploration where value estimates are uncertain, improving training robustness and the quality of the resulting policies. This yields a robust planning approach that handles complex planning problems and adapts easily to different applications, as demonstrated across several domains, including race driving, a modified Acrobot, and Lunar Lander with added obstacles. Our results in these domains show better data efficiency and overall performance in terms of both rewards and task success, with up to a 72% increase in success rate compared to existing approaches, as well as accelerated convergence (2.1×) compared to non-adaptive sampling.
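To make the coupling described above concrete, here is a minimal, self-contained sketch (NumPy only) of the general idea: an RL policy warm-starts the MPPI sampler, the MPPI sample budget is scaled up where a critic ensemble is uncertain, and the weighted MPPI rollouts are aggregated into a value target. All names and interfaces here (policy_mean, critic_ensemble, dynamics, reward, and the specific uncertainty-to-budget rule) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rl_guided_mppi(state, policy_mean, critic_ensemble, dynamics, reward,
                   horizon=20, base_samples=64, max_samples=256,
                   noise_std=0.3, temperature=1.0, gamma=0.99):
    """One planning step: returns the first action and an MPPI-aggregated value target."""
    # 1) Uncertainty-adaptive sample budget: more rollouts where critics disagree.
    values = np.array([critic(state) for critic in critic_ensemble])
    uncertainty = values.std()
    n_samples = int(np.clip(base_samples * (1.0 + uncertainty), base_samples, max_samples))

    # 2) The RL policy provides the nominal control sequence that centers the sampler.
    nominal = np.stack([policy_mean(state) for _ in range(horizon)])      # (H, act_dim)
    noise = np.random.randn(n_samples, horizon, nominal.shape[-1]) * noise_std
    candidates = nominal[None] + noise                                    # (K, H, act_dim)

    # 3) Roll out each candidate sequence through the (learned or known) model.
    returns = np.zeros(n_samples)
    for k in range(n_samples):
        s = state
        for t in range(horizon):
            returns[k] += (gamma ** t) * reward(s, candidates[k, t])
            s = dynamics(s, candidates[k, t])
        # Bootstrap with the mean critic value at the rollout's terminal state.
        returns[k] += (gamma ** horizon) * np.mean([c(s) for c in critic_ensemble])

    # 4) MPPI weighting: exponentiated, normalized return advantages.
    weights = np.exp((returns - returns.max()) / temperature)
    weights /= weights.sum()

    # 5) Weighted action for execution, plus a sample-aggregated value target
    #    that could supervise the critics during training.
    action = (weights[:, None, None] * candidates).sum(axis=0)[0]
    value_target = float((weights * returns).sum())
    return action, value_target
```

In this sketch, the same weighted rollouts serve two roles: they produce the executed action and they feed back into value learning, which is one plausible reading of how the RL and MPPI components could share information.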