Non-Equilibrium MAV-Capture-MAV via Time-Optimal Planning and Reinforcement Learning
By: Canlun Zheng, Zhanyu Guo, Zikang Yin, and more
Potential Business Impact:
Drones catch fast, tricky flying targets.
The capture of flying MAVs (micro aerial vehicles) has garnered increasing research attention due to its intriguing challenges and promising applications. Despite recent advancements, a key limitation of existing work is that capture strategies are often relatively simple and constrained by platform performance. This paper develops control strategies for capturing highly maneuverable targets. The need to achieve capture while the pursuer itself is in an unstable state distinguishes this task from traditional pursuit-evasion and guidance problems. In this study, we transition from larger MAV platforms to a specially designed, compact capture MAV equipped with a custom launching device, while maintaining high maneuverability. We explore both time-optimal planning (TOP) and reinforcement learning (RL) methods. Simulations demonstrate that TOP produces more maneuverable, shorter trajectories, while RL excels in real-time adaptability and stability. Moreover, the RL method has been tested in real-world scenarios, successfully achieving target capture even from unstable states.
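The time-optimal planning approach contrasted with RL in the abstract can be sketched, under standard assumptions, as a minimum-time optimal control problem. This formulation is illustrative and not taken from the paper: \(x\) denotes the capture MAV's state, \(u\) its control input, \(p\) its position, \(p_{\mathrm{tgt}}\) the target's position, and \(r_c\) a hypothetical capture radius set by the launching device's effective range.

```latex
\begin{aligned}
\min_{u(\cdot),\;T} \quad & T \\
\text{s.t.} \quad & \dot{x}(t) = f\bigl(x(t), u(t)\bigr), & t \in [0, T], \\
& u(t) \in \mathcal{U}, & \text{(actuator limits)} \\
& \bigl\| p(T) - p_{\mathrm{tgt}}(T) \bigr\| \le r_c. & \text{(capture condition)}
\end{aligned}
```

Solving this yields the short, aggressive trajectories attributed to TOP, but it must be re-solved as the target moves, which is one plausible reason the RL policy is reported to adapt better in real time.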
Similar Papers
Sim-to-Real Transfer in Reinforcement Learning for Maneuver Control of a Variable-Pitch MAV
Robotics
Drones learn to do flips and tricky moves.
Simultaneous learning of state-to-state minimum-time planning and control
Robotics
Drones fly themselves to any spot fast.
Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning
Artificial Intelligence
Teaches fighter jets how to win simulated dogfights.