Multitask Reinforcement Learning for Quadcopter Attitude Stabilization and Tracking using Graph Policy
By: Yu Tang Liu, Afonso Vale, Aamir Ahmad, and more
Potential Business Impact:
Drones fly better and learn faster.
Quadcopter attitude control involves two tasks: smooth attitude tracking and aggressive stabilization from arbitrary states. Although both can be formulated as tracking problems, their distinct state spaces and control strategies complicate a unified reward function. We propose a multitask deep reinforcement learning framework that leverages parallel simulation with IsaacGym and a Graph Convolutional Network (GCN) policy to address both tasks effectively. Our multitask Soft Actor-Critic (SAC) approach achieves faster, more reliable learning and higher sample efficiency than single-task methods. We validate its real-world applicability by deploying the learned policy, a compact two-layer network with 24 neurons per layer, on a Pixhawk flight controller, achieving 400 Hz control without extra computational resources. We provide our code at https://github.com/robot-perception-group/GraphMTSAC_UAV/.
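To give a sense of scale for the deployed policy, below is a minimal sketch of a network with two hidden layers of 24 neurons each, one plausible reading of the "compact two-layer network" described in the abstract. It assumes PyTorch; the class name, observation/action dimensions, and the tanh output squashing are illustrative assumptions, not taken from the authors' repository.

```python
# Minimal sketch only. Assumptions: PyTorch; obs_dim/act_dim are placeholders;
# names are hypothetical and do not come from GraphMTSAC_UAV.
import torch
import torch.nn as nn


class CompactPolicy(nn.Module):
    """Two hidden layers of 24 neurons each, roughly the size of the network
    deployed on the Pixhawk flight controller per the abstract."""

    def __init__(self, obs_dim: int = 12, act_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 24),
            nn.ReLU(),
            nn.Linear(24, 24),
            nn.ReLU(),
            nn.Linear(24, act_dim),
            nn.Tanh(),  # bounded, normalized actions (assumed)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


# Example forward pass with an assumed 12-dimensional attitude state.
policy = CompactPolicy()
action = policy(torch.zeros(1, 12))
print(action.shape)  # torch.Size([1, 4])
```

A network this small (a few hundred parameters) is what makes 400 Hz inference feasible on a flight controller without extra compute; the GCN structure and multitask SAC training described in the paper apply during learning, while the deployed artifact remains a compact feedforward policy.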
Similar Papers
Curriculum-based Sample Efficient Reinforcement Learning for Robust Stabilization of a Quadrotor
Robotics
Teaches drones to fly steady and land safely.
Graph Attention-based Decentralized Actor-Critic for Dual-Objective Control of Multi-UAV Swarms
Signal Processing
Drones cover more ground, last longer.
Deep Graph Reinforcement Learning for UAV-Enabled Multi-User Secure Communications
Signal Processing
Drones learn to send secret messages safely.