A Simple Approach to Constraint-Aware Imitation Learning with Application to Autonomous Racing
By: Shengfan Cao, Eunhyek Joa, Francesco Borrelli
Potential Business Impact:
Teaches robots to drive safely and fast.
Guaranteeing constraint satisfaction is challenging in imitation learning (IL), particularly in tasks that require operating near a system's handling limits. Traditional IL methods, such as Behavior Cloning (BC), often struggle to enforce constraints, leading to suboptimal performance in high-precision tasks. In this paper, we present a simple approach to incorporating safety into the IL objective. Through simulations, we empirically validate our approach on an autonomous racing task with both full-state and image feedback, demonstrating improved constraint satisfaction and greater consistency in task performance compared to BC.
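The abstract does not spell out the exact formulation, but the idea of "incorporating safety into the IL objective" can be illustrated with a hedged sketch: a behavior-cloning loss augmented with a soft penalty on constraint violations. The constraint function `g(state, action) <= 0` and the penalty weight below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a constraint-aware behavior-cloning loss.
# Assumes a differentiable constraint function g(state, action) whose
# output is <= 0 when the constraint (e.g., track boundary, input limit)
# is satisfied; this is NOT the paper's exact formulation.
import torch
import torch.nn as nn


class ConstraintAwareBCLoss(nn.Module):
    def __init__(self, constraint_fn, penalty_weight: float = 10.0):
        super().__init__()
        self.constraint_fn = constraint_fn   # g(states, actions) -> violation values
        self.penalty_weight = penalty_weight
        self.mse = nn.MSELoss()

    def forward(self, pred_actions, expert_actions, states):
        # Standard behavior-cloning term: match the expert's actions.
        imitation_loss = self.mse(pred_actions, expert_actions)
        # Soft safety term: hinge penalty on any predicted constraint violation.
        violation = torch.relu(self.constraint_fn(states, pred_actions))
        constraint_loss = violation.mean()
        return imitation_loss + self.penalty_weight * constraint_loss
```

In this sketch the policy is still trained purely from expert demonstrations, as in BC, but predicted actions that would violate the (assumed) constraints incur an additional loss, nudging the learned policy toward the constraint-satisfying behavior reported in the paper's racing experiments.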
Similar Papers
On learning racing policies with reinforcement learning
Robotics
Teaches race cars to drive themselves faster.
Action-Constrained Imitation Learning
Robotics
Teaches robots to copy experts safely.
Exposing the Copycat Problem of Imitation-based Planner: A Novel Closed-Loop Simulator, Causal Benchmark and Joint IL-RL Baseline
CV and Pattern Recognition
Teaches self-driving cars to learn better.