Learning with Expert Abstractions for Efficient Multi-Task Continuous Control
By: Jeff Jewett, Sandhya Saisubramanian
Potential Business Impact:
Teaches robots to learn new tasks faster.
Decision-making in complex, continuous multi-task environments is often hindered by the difficulty of obtaining accurate models for planning and the inefficiency of learning purely from trial and error. While precise environment dynamics may be hard to specify, human experts can often provide high-fidelity abstractions that capture the essential high-level structure of a task and user preferences in the target environment. Existing hierarchical approaches often target discrete settings and do not generalize across tasks. We propose a hierarchical reinforcement learning approach that addresses these limitations by dynamically planning over the expert-specified abstraction to generate subgoals that guide the learning of a goal-conditioned policy. To overcome the challenge of learning under sparse rewards, we shape the reward based on the optimal state value in the abstract model. This structured decision-making process enhances sample efficiency and facilitates zero-shot generalization. Our empirical evaluation on a suite of procedurally generated continuous control environments demonstrates that our approach outperforms existing hierarchical reinforcement learning methods in sample efficiency, task completion rate, scalability to complex tasks, and generalization to novel scenarios.
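The approach described above combines two mechanisms: planning over the expert-specified abstraction to produce subgoals, and shaping the sparse reward with the abstract model's optimal state values. The snippet below is a minimal sketch of one plausible instantiation, not the authors' implementation: it assumes a deterministic, graph-structured abstraction, and every name in it (the room layout, rewards, and helper functions) is a hypothetical example. The shaping term uses the standard potential-based form F(s, s') = gamma * Phi(s') - Phi(s) with the potential Phi set to the abstract optimal value V*.

```python
# A toy expert-specified abstraction: abstract states with expert-annotated
# transitions and step rewards. Layout and numbers are illustrative only,
# not from the paper.
abstract_model = {
    "start":   [("hallway", -1.0)],
    "hallway": [("start", -1.0), ("room_a", -1.0), ("goal", -5.0)],
    "room_a":  [("hallway", -1.0), ("goal", -1.0)],
    "goal":    [],  # absorbing goal state
}

def value_iteration(model, goal, gamma=0.99, iters=200):
    """Compute optimal state values V* over the abstract model."""
    V = {s: 0.0 for s in model}
    for _ in range(iters):
        for s, succs in model.items():
            if s == goal or not succs:
                continue  # goal value stays fixed at 0
            V[s] = max(r + gamma * V[s2] for s2, r in succs)
    return V

def next_subgoal(model, V, abstract_state, gamma=0.99):
    """Greedy plan step over the abstraction: the successor with the best
    backed-up value becomes the subgoal handed to the low-level policy."""
    return max(model[abstract_state],
               key=lambda sr: sr[1] + gamma * V[sr[0]])[0]

def shaped_reward(r_env, V, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping with Phi = V*: densifies a sparse
    environment reward without changing the optimal policy."""
    return r_env + gamma * V[phi_s_next] - V[phi_s]

V = value_iteration(abstract_model, goal="goal")
print(next_subgoal(abstract_model, V, "start"))   # -> "hallway"
print(shaped_reward(0.0, V, "start", "hallway"))  # positive progress signal
```

In this sketch, the goal-conditioned low-level policy would be trained on (state, subgoal) pairs using the shaped reward; because the shaping term is potential-based, it telescopes along any trajectory and so leaves the optimal policy of the original task unchanged.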
Similar Papers
Hierarchical Reinforcement Learning with Low-Level MPC for Multi-Agent Control
Systems and Control
Helps robots learn to move safely together.
Extensive Exploration in Complex Traffic Scenarios using Hierarchical Reinforcement Learning
Machine Learning (CS)
Teaches cars to drive safely in tricky traffic.
Brain-Inspired Planning for Better Generalization in Reinforcement Learning
Artificial Intelligence
Teaches robots to plan and learn like people.