Safety Reinforced Model Predictive Control (SRMPC): Improving MPC with Reinforcement Learning for Motion Planning in Autonomous Driving
By: Johannes Fischer, Marlon Steiner, Ömer Sahin Tas, and more
Potential Business Impact:
Helps self-driving cars find better, safer routes.
Model predictive control (MPC) is widely used for motion planning, particularly in autonomous driving. Achieving real-time capability requires the planner to solve convex approximations of the underlying optimal control problem (OCP). However, such approximations confine the solution to a subspace that might not contain the global optimum. To address this, we propose using safe reinforcement learning (SRL) to obtain a new, safe reference trajectory within the MPC. By employing a learning-based approach, the MPC can explore solutions beyond the close neighborhood of the previous solution, potentially finding a global optimum. We incorporate constrained reinforcement learning (CRL) to ensure safety in automated driving, using a handcrafted, energy-function-based safety index as the constraint objective to model safe and unsafe regions. Our approach solves the CRL problem with a state-dependent Lagrangian multiplier that is learned concurrently with the safe policy. Through experiments in a highway scenario, we demonstrate the superiority of our approach over both MPC and SRL in terms of safety and performance measures.
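To make the two CRL ingredients above concrete, here is a minimal PyTorch sketch of an energy-function-based safety index and an alternating Lagrangian update with a state-dependent multiplier. Everything in it is an illustrative assumption rather than the paper's implementation: the names `MultiplierNet`, `safety_index`, and `lagrangian_step`, the assumed state layout, and the parameters `sigma`, `d_min`, `k`, and `budget` are all hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch of the CRL ingredients described in the abstract.
# All names, shapes, and safety-index terms are illustrative assumptions.

class MultiplierNet(nn.Module):
    """State-dependent Lagrange multiplier lambda(s) >= 0."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Softplus(),  # Softplus keeps lambda non-negative
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def safety_index(state: torch.Tensor) -> torch.Tensor:
    """Handcrafted energy-function-based safety index phi(s).

    Illustrative form: phi = sigma + d_min**2 - d**2 - k * d_dot, where
    d is the distance to the nearest obstacle and d_dot its rate of
    change; phi <= 0 marks the safe set. The paper's exact terms and
    parameters may differ.
    """
    d, d_dot = state[..., 0], state[..., 1]   # assumed state layout
    sigma, d_min, k = 0.1, 2.0, 1.0           # assumed parameters
    return sigma + d_min**2 - d**2 - k * d_dot

def lagrangian_step(reward_loss_fn, cost_fn, multiplier, states,
                    opt_pi, opt_lam, budget: float = 0.0):
    """One alternating update of the min-max Lagrangian.

    reward_loss_fn(states) and cost_fn(states) are assumed to return
    losses that are differentiable through the current policy (e.g.,
    actor losses built from reward and cost critics). The policy
    descends on J_pi + E[lambda(s) * c(s)] with the multiplier frozen;
    the multiplier then ascends on E[lambda(s) * (c(s) - budget)],
    growing where the constraint is violated and shrinking where slack.
    """
    # Policy step (multiplier treated as a constant weight).
    lam = multiplier(states).squeeze(-1).detach()
    costs = cost_fn(states)  # differentiable w.r.t. policy parameters
    pi_loss = reward_loss_fn(states) + (lam * costs).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()

    # Multiplier step: ascent via descent on the negated objective.
    lam = multiplier(states).squeeze(-1)
    lam_loss = -(lam * (cost_fn(states).detach() - budget)).mean()
    opt_lam.zero_grad(); lam_loss.backward(); opt_lam.step()
```

Parameterizing the multiplier as a function of the state, rather than as a single scalar, lets the constraint be weighted heavily in risky states while leaving benign states nearly unconstrained, which is the motivation for learning it concurrently with the policy.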
Similar Papers
From Shadow to Light: Toward Safe and Efficient Policy Learning Across MPC, DeePC, RL, and LLM Agents
Robotics
Makes robots move faster and safer.
Residual MPC: Blending Reinforcement Learning with GPU-Parallelized Model Predictive Control
Robotics
Robots walk better by combining two smart control methods.
A Step-by-step Guide on Nonlinear Model Predictive Control for Safe Mobile Robot Navigation
Robotics
Robot safely moves around things, even if they move.