CARoL: Context-aware Adaptation for Robot Learning
By: Zechen Hu, Tong Xu, Xuesu Xiao, and more
Potential Business Impact:
Robots learn new jobs faster using old skills.
Using Reinforcement Learning (RL) to learn new robotic tasks from scratch is often inefficient. Leveraging prior knowledge can significantly enhance learning efficiency, but doing so raises two critical challenges: how to determine the relevance of existing knowledge and how to adaptively integrate it into learning a new task. In this paper, we propose Context-aware Adaptation for Robot Learning (CARoL), a novel framework for efficiently learning a similar but distinct new task from prior knowledge. CARoL incorporates context awareness by analyzing state transitions in system dynamics to identify similarities between the new task and prior knowledge. It then uses these identified similarities to prioritize and adapt specific knowledge pieces for the new task. Additionally, CARoL is broadly applicable, spanning policy-based, value-based, and actor-critic RL algorithms. We validate the efficiency and generalizability of CARoL on both simulated robotic platforms and physical ground vehicles. The simulations include the CarRacing and LunarLander environments, where CARoL demonstrates faster convergence and higher rewards when learning policies for new tasks. In real-world experiments, we show that CARoL enables a ground vehicle to quickly and efficiently adapt policies learned in simulation and smoothly traverse real-world off-road terrain.
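The abstract describes the mechanism only at a high level, but its core idea, scoring each piece of prior knowledge by how well its associated dynamics explain the new task's state transitions and then weighting that knowledge accordingly, can be illustrated with a minimal sketch. Everything below (the linear stand-in dynamics models, the `context_weights` helper, the softmax temperature) is an illustrative assumption, not the paper's actual implementation or API.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): each prior task is
# represented by a dynamics model mapping (state, action) -> predicted next
# state. Random linear models stand in for what would, in practice, be
# learned alongside each prior policy.
def make_linear_dynamics(A):
    return lambda s, a: A @ np.concatenate([s, a])

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2
prior_dynamics = [
    make_linear_dynamics(rng.normal(size=(state_dim, state_dim + action_dim)))
    for _ in range(3)
]

# Transitions (s, a, s') collected on the new task.
new_task_transitions = [
    (rng.normal(size=state_dim), rng.normal(size=action_dim), rng.normal(size=state_dim))
    for _ in range(50)
]

def context_weights(transitions, dynamics_models, temperature=1.0):
    """Score each prior task by how well its dynamics model predicts the new
    task's state transitions, then convert the negative prediction errors into
    softmax weights: higher weight means more relevant prior knowledge."""
    errors = np.array([
        np.mean([np.linalg.norm(model(s, a) - s_next) ** 2
                 for s, a, s_next in transitions])
        for model in dynamics_models
    ])
    logits = -errors / temperature
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()

weights = context_weights(new_task_transitions, prior_dynamics)
print("relevance weights over prior tasks:", np.round(weights, 3))
# Such weights could then prioritize which prior policy or value function to
# distill from, or regularize toward, when training on the new task.
```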
Similar Papers
Context-Aware Model-Based Reinforcement Learning for Autonomous Racing
Machine Learning (CS)
Helps self-driving cars learn to race better.
A Comprehensive Review of Reinforcement Learning for Autonomous Driving in the CARLA Simulator
Robotics
Helps self-driving cars learn to drive better.
Knowledge capture, adaptation and composition (KCAC): A framework for cross-task curriculum learning in robotic manipulation
Robotics
Teaches robots to learn tasks faster and better.