Solving Robotics Tasks with Prior Demonstration via Exploration-Efficient Deep Reinforcement Learning
By: Chengyandan Shen, Christoffer Sloth
Potential Business Impact:
Teaches robots to learn tasks faster and better.
This paper proposes an exploration-efficient Deep Reinforcement Learning with Reference policy (DRLR) framework that incorporates demonstrations for learning robotics tasks. The DRLR framework builds on an algorithm called Imitation Bootstrapped Reinforcement Learning (IBRL). We propose to improve IBRL by modifying its action selection module: the proposed module provides a calibrated Q-value, which mitigates the bootstrapping error that otherwise leads to inefficient exploration. Furthermore, to prevent the RL policy from converging to a sub-optimal policy, Soft Actor-Critic (SAC) is used as the RL policy instead of TD3. The effectiveness of our method in mitigating bootstrapping error and preventing overfitting is empirically validated on two robotics tasks, bucket loading and open drawer, both of which require extensive interaction with the environment. Simulation results also demonstrate the robustness of the DRLR framework across tasks with both low and high state-action dimensions and with varying demonstration quality. To evaluate the developed framework on a real-world industrial robotics task, the bucket loading task is deployed on a real wheel loader, and the sim2real results validate the successful deployment of the DRLR framework.
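The abstract does not spell out the modified action selection module, but the IBRL-style mechanism it builds on can be sketched roughly as follows: both the demonstration-trained reference policy and the RL policy propose an action, and the one scoring higher under the critic is executed. The sketch below is a minimal illustration under that assumption; the names (`ref_policy`, `rl_policy`, `critics`) and the min-over-critics "calibration" are placeholders, not the paper's actual calibrated Q-value computation.

```python
# Minimal sketch of IBRL-style action selection with a placeholder
# Q-value calibration step (illustrative assumptions, not the paper's
# exact DRLR implementation).
import torch


def calibrated_q(critics, state, action):
    # Placeholder calibration: take the minimum over an ensemble of critics
    # to reduce overestimation bias. The paper's calibrated Q-value may be
    # computed differently.
    with torch.no_grad():
        return min(float(q(state, action)) for q in critics)


def select_action(state, ref_policy, rl_policy, critics):
    # IBRL-style selection for a single (unbatched) state: propose one action
    # from the reference (imitation) policy and one from the RL policy (e.g.
    # SAC), then execute whichever scores higher under the calibrated critic.
    with torch.no_grad():
        a_ref = ref_policy(state)
        a_rl = rl_policy(state)
    q_ref = calibrated_q(critics, state, a_ref)
    q_rl = calibrated_q(critics, state, a_rl)
    return a_ref if q_ref >= q_rl else a_rl
```

A calibrated comparison of this kind matters because an overestimated Q-value for the RL policy's proposal can cause it to be chosen over the (often better) demonstration-derived action early in training, which is the bootstrapping-error issue the abstract describes.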
Similar Papers
An Introduction to Deep Reinforcement and Imitation Learning
Robotics
Teaches robots to learn by watching and trying.
Bootstrapping Reinforcement Learning with Sub-optimal Policies for Autonomous Driving
Robotics
Teaches self-driving cars to learn faster.
Quantum deep reinforcement learning for humanoid robot navigation task
Robotics
Robots learn to walk faster using quantum power.