Re4MPC: Reactive Nonlinear MPC for Multi-model Motion Planning via Deep Reinforcement Learning
By: Neşet Ünver Akmandor, Sarvesh Prajapati, Mark Zolotas, and more
Potential Business Impact:
Robots move smarter and faster with less thinking.
Traditional motion planning methods for robots with many degrees of freedom, such as mobile manipulators, are often computationally prohibitive for real-world settings. In this paper, we propose a novel multi-model motion planning pipeline, termed Re4MPC, which computes trajectories using Nonlinear Model Predictive Control (NMPC). Re4MPC generates trajectories in a computationally efficient manner by reactively selecting the model, cost, and constraints of the NMPC problem depending on the complexity of the task and the robot state. The policy for this reactive decision-making is learned via a Deep Reinforcement Learning (DRL) framework. We introduce a mathematical formulation to integrate NMPC into this DRL framework. To validate our methodology and design choices, we evaluate DRL training and test outcomes in a physics-based simulation involving a mobile manipulator. Experimental results demonstrate that Re4MPC is more computationally efficient and achieves higher success rates in reaching end-effector goals than the NMPC baseline, which computes whole-body trajectories without our learning mechanism.
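To make the reactive selection idea concrete, below is a minimal Python sketch of the decision loop the abstract describes: a learned policy picks one of several NMPC configurations (e.g., base-only, arm-only, whole-body) based on the current state and goal, and the reward trades off goal progress against computational effort. All names, dimensions, and the epsilon-greedy bandit stand-in for the paper's DRL framework are illustrative assumptions, not the authors' implementation; the NMPC solve is a placeholder for a real nonlinear solver.

```python
import numpy as np

# Hypothetical NMPC model configurations the reactive policy can choose from.
# Names, dimensions, and horizons are illustrative, not taken from the paper.
NMPC_MODES = {
    0: {"name": "base_only",  "state_dim": 3, "horizon": 20},   # cheapest
    1: {"name": "arm_only",   "state_dim": 6, "horizon": 20},
    2: {"name": "whole_body", "state_dim": 9, "horizon": 30},   # most expensive
}


def solve_nmpc(mode, robot_state, goal):
    """Placeholder for an NMPC solve under the selected model/cost/constraints.

    A real implementation would call a nonlinear solver; here we return a
    straight-line dummy trajectory and a proxy for computational cost.
    """
    cfg = NMPC_MODES[mode]
    traj = np.linspace(robot_state[: cfg["state_dim"]],
                       goal[: cfg["state_dim"]],
                       cfg["horizon"])
    compute_cost = cfg["state_dim"] * cfg["horizon"]  # proxy for solve time
    return traj, compute_cost


class ReactiveModePolicy:
    """Tiny epsilon-greedy stand-in for the DRL policy that selects an NMPC mode."""

    def __init__(self, n_modes=3, epsilon=0.1, lr=0.05):
        self.q = np.zeros(n_modes)  # per-mode value estimate (state-independent sketch)
        self.epsilon = epsilon
        self.lr = lr

    def select(self, observation):
        # Observation is accepted for interface parity but ignored in this sketch.
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.q))
        return int(np.argmax(self.q))

    def update(self, mode, reward):
        self.q[mode] += self.lr * (reward - self.q[mode])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    policy = ReactiveModePolicy()
    for episode in range(50):
        robot_state = rng.uniform(-1.0, 1.0, size=9)
        goal = rng.uniform(-1.0, 1.0, size=9)
        obs = np.concatenate([robot_state, goal])
        mode = policy.select(obs)
        traj, compute_cost = solve_nmpc(mode, robot_state, goal)
        # Reward trades off goal-reaching accuracy against computation,
        # mirroring the efficiency-vs-success trade-off described above.
        goal_error = np.linalg.norm(traj[-1] - goal[: traj.shape[1]])
        reward = -goal_error - 0.001 * compute_cost
        policy.update(mode, reward)
    print("Learned mode preferences:", policy.q)
```

In this sketch the policy gradually prefers the cheapest mode that still reaches the goal, which is the intuition behind reactively downgrading from whole-body to simpler models when the task allows it.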
Similar Papers
Multi-Agent Feedback Motion Planning using Probably Approximately Correct Nonlinear Model Predictive Control
Robotics
Robots work together safely, even with problems.
Flexible Locomotion Learning with Diffusion Model Predictive Control
Robotics
Robots learn to walk and change how they move.
A nonlinear real time capable motion cueing algorithm based on deep reinforcement learning
Systems and Control
Makes flight simulators feel more real.