Improving Drone Racing Performance Through Iterative Learning MPC
By: Haocheng Zhao, Niklas Schlüter, Lukas Brunke, and more
Potential Business Impact:
Makes racing drones fly faster and avoid crashing.
Autonomous drone racing presents a challenging control problem, requiring real-time decision-making and robust handling of nonlinear system dynamics. While iterative learning model predictive control (LMPC) offers a promising framework for iterative performance improvement, its direct application to drone racing faces challenges such as real-time compatibility and the trade-off between time-optimal and safe traversal. In this paper, we enhance LMPC with three key innovations: (1) an adaptive cost function that dynamically weights time-optimal tracking against centerline adherence, (2) a shifted local safe set that prevents excessive shortcutting and enables more robust iterative updates, and (3) a Cartesian-based formulation that accommodates safety constraints without the singularities or integration errors associated with Frenet-frame transformations. Extensive simulation and real-world experiments demonstrate that the improved algorithm can optimize initial trajectories generated by a wide range of controllers with varying levels of tuning, yielding lap-time improvements of up to 60.85%. Even when applied to the most aggressively tuned state-of-the-art model-based controller, MPCC++, on a real drone, it still achieves a 6.05% improvement. Overall, the proposed method pushes the drone toward faster traversal while avoiding collisions in both simulation and real-world experiments, making it a practical solution for improving the peak performance of drone racing.
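As a rough illustration of the first innovation, the sketch below shows one way an adaptive stage cost could blend a time-optimal (progress-maximizing) term with a centerline-adherence term based on distance to the next gate. The blending rule, weights, and function names here are assumptions chosen for illustration; they are not the paper's exact formulation.

```python
import numpy as np

def adaptive_stage_cost(pos, progress_rate, centerline_pt, gate_dist,
                        w_time=1.0, w_center_max=5.0, d_switch=2.0):
    """Illustrative adaptive stage cost for a racing MPC.

    Near a gate (small gate_dist) the centerline-adherence penalty dominates,
    keeping the drone on a safe passage; far from gates the time-optimal
    (progress-maximizing) term dominates. All parameter names and the linear
    blending rule are assumptions, not the paper's method.
    """
    # Blend factor in [0, 1]: 1 right at the gate, 0 beyond d_switch meters away.
    alpha = np.clip(1.0 - gate_dist / d_switch, 0.0, 1.0)
    # Deviation from the reference centerline point at this stage.
    centerline_err = np.linalg.norm(np.asarray(pos) - np.asarray(centerline_pt))
    # Reward forward progress (negative cost), penalize centerline deviation.
    return -w_time * progress_rate + alpha * w_center_max * centerline_err**2
```

In such a scheme, the solver is free to cut corners on open track segments, while the growing centerline weight near gates discourages the excessive shortcutting that the shifted local safe set is also designed to prevent.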
Similar Papers
Improving Drone Racing Performance Through Iterative Learning MPC
Robotics
Makes racing drones fly faster and avoid crashing.
MM-LMPC: Multi-Modal Learning Model Predictive Control via Bandit-Based Mode Selection
Systems and Control
Finds better ways to do tasks by trying all options.