Reinforcement Learning via Implicit Imitation Guidance
By: Perry Dong, Alec M. Lessing, Annie S. Chen and more
Potential Business Impact:
Teaches robots new skills faster with smart guesses.
We study the problem of sample-efficient reinforcement learning, where prior data such as demonstrations are provided for initialization in lieu of a dense reward signal. A natural approach is to incorporate an imitation learning objective, either as regularization during training or to acquire a reference policy. However, imitation learning objectives can ultimately degrade long-term performance, as they do not directly align with reward maximization. In this work, we propose to use prior data solely for guiding exploration via noise added to the policy, sidestepping the need for explicit behavior cloning constraints. The key insight in our framework, Data-Guided Noise (DGN), is that demonstrations are most useful for identifying which actions should be explored, rather than for forcing the policy to take certain actions. Our approach achieves up to 2-3x improvement over prior methods for reinforcement learning from offline data across seven simulated continuous control tasks.
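The abstract only sketches the high-level idea, so the following is a minimal illustrative sketch, not the paper's actual DGN procedure: it assumes a simple nearest-neighbor lookup over demonstration states and biases the exploration noise toward the demonstrated action, instead of adding isotropic Gaussian noise or a behavior cloning loss. The function name, the k-nearest-neighbor choice, and the exact form of the bias are all assumptions made for illustration.

```python
import numpy as np

def data_guided_noise_action(policy_action, state, demo_states, demo_actions,
                             noise_scale=0.1, k=1):
    """Hypothetical sketch: perturb the policy's action toward actions taken
    in the demonstrations at similar states, rather than with purely random noise.

    policy_action : policy output for the current state
    demo_states, demo_actions : offline demonstration data
    """
    # Find the k demonstration states closest to the current state.
    dists = np.linalg.norm(demo_states - state, axis=1)
    nearest = np.argsort(dists)[:k]
    demo_target = demo_actions[nearest].mean(axis=0)

    # Direction from the policy's action toward the demonstrated action.
    direction = demo_target - policy_action

    # Exploration noise biased along that direction, plus a small random component.
    noise = noise_scale * direction + noise_scale * np.random.randn(*policy_action.shape)
    return policy_action + noise


# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
demo_states = rng.normal(size=(100, 4))
demo_actions = rng.normal(size=(100, 2))
state = rng.normal(size=4)
policy_action = np.zeros(2)
print(data_guided_noise_action(policy_action, state, demo_states, demo_actions))
```

The point of the sketch is only that the demonstrations shape where the policy explores, while the learning signal itself still comes from reward maximization rather than an imitation objective.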
Similar Papers
CCDP: Composition of Conditional Diffusion Policies with Guided Sampling
Robotics
Robots learn to fix mistakes without trying again.
Steering Your Diffusion Policy with Latent Space Reinforcement Learning
Robotics
Robots learn to improve by themselves.
Model Predictive Adversarial Imitation Learning for Planning from Observation
Robotics
Teaches robots to plan and learn from watching.