Action-Constrained Imitation Learning
By: Chia-Han Yeh, Tse-Sheng Nan, Risto Vuorio and more
Potential Business Impact:
Teaches robots to copy experts safely.
Policy learning under action constraints plays a central role in ensuring safe behaviors in various robot control and resource allocation applications. In this paper, we study a new problem setting termed Action-Constrained Imitation Learning (ACIL), where an action-constrained imitator aims to learn from a demonstrative expert with a larger action space. The fundamental challenge of ACIL lies in the unavoidable mismatch of occupancy measure between the expert and the imitator caused by the action constraints. We tackle this mismatch through trajectory alignment and propose DTWIL, which replaces the original expert demonstrations with a surrogate dataset that follows similar state trajectories while adhering to the action constraints. Specifically, we recast trajectory alignment as a planning problem and solve it via Model Predictive Control, which aligns the surrogate trajectories with the expert trajectories based on the Dynamic Time Warping (DTW) distance. Through extensive experiments, we demonstrate that learning from the dataset generated by DTWIL significantly enhances performance across multiple robot control tasks and outperforms various benchmark imitation learning algorithms in terms of sample efficiency. Our code is publicly available at https://github.com/NYCU-RL-Bandits-Lab/ACRL-Baselines.
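To make the trajectory-alignment idea concrete, here is a minimal sketch (not the authors' implementation) of the two ingredients the abstract names: a DTW distance between state trajectories, and a sampling-based MPC loop that picks constrained actions whose short-horizon rollouts stay close to the expert trajectory under that distance. The `dynamics.step`, `dynamics.action_dim`, `project_action`, `horizon`, and `n_samples` names are illustrative assumptions; DTWIL's actual planner in the linked repository may be formulated differently.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two state trajectories
    (arrays of shape [T, state_dim]), using Euclidean step cost."""
    n, m = len(traj_a), len(traj_b)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            dp[i, j] = cost + min(dp[i - 1, j], dp[i, j - 1], dp[i - 1, j - 1])
    return dp[n, m]

def align_trajectory(expert_states, dynamics, project_action,
                     horizon=5, n_samples=64, rng=None):
    """Sampling-based MPC sketch (hypothetical interface): at each step,
    sample actions, project them onto the constrained action set, roll the
    dynamics model forward, and keep the first action whose rollout has the
    smallest DTW distance to the expert states. Returns a surrogate
    (state, action) dataset that respects the action constraints."""
    rng = np.random.default_rng() if rng is None else rng
    state = expert_states[0].copy()
    surrogate = []
    for t in range(len(expert_states) - 1):
        best_action, best_score = None, np.inf
        for _ in range(n_samples):
            # Sample a raw action and project it onto the imitator's action set.
            action = project_action(rng.normal(size=dynamics.action_dim))
            # Roll the assumed dynamics model forward for a short horizon.
            rollout, s = [state], state
            for h in range(horizon):
                a = action if h == 0 else project_action(rng.normal(size=dynamics.action_dim))
                s = dynamics.step(s, a)
                rollout.append(s)
            score = dtw_distance(np.array(rollout), expert_states[t:t + horizon + 1])
            if score < best_score:
                best_score, best_action = score, action
        surrogate.append((state.copy(), best_action))
        state = dynamics.step(state, best_action)
    return surrogate
```

The surrogate dataset produced this way can then be fed to any standard imitation learning algorithm in place of the original expert demonstrations, which is the role it plays in the method described above.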
Similar Papers
A Simple Approach to Constraint-Aware Imitation Learning with Application to Autonomous Racing
Machine Learning (CS)
Teaches robots to drive safely and fast.
When a Robot is More Capable than a Human: Learning from Constrained Demonstrators
Robotics
Robots learn faster by exploring, not just copying.
Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
Robotics
Teaches robots to learn safely without breaking things.