Dreaming Falcon: Physics-Informed Model-Based Reinforcement Learning for Quadcopters
By: Eashan Vytla, Bhavanishankar Kalavakolanu, Andrew Perrault, and more
Potential Business Impact:
Teaches drones to fly safely in wind.
Current control algorithms for aerial robots struggle with robustness in dynamic environments and adverse conditions. Model-based reinforcement learning (RL) has shown strong potential for handling these challenges while remaining sample-efficient. In particular, Dreamer has demonstrated that online model-based RL can be achieved with a recurrent world model trained on replay-buffer data. However, applying Dreamer to aerial systems has proven challenging due to sample inefficiency and the poor generalization of learned dynamics models. Our work explores a physics-informed approach to world model learning with the aim of improving policy performance. The world model treats the quadcopter as a free body and predicts the net forces and moments acting on it, which are then passed through a six-degree-of-freedom (6-DOF) fourth-order Runge-Kutta (RK4) integrator to produce future state rollouts. In this paper, we compare this physics-informed method to a standard RNN-based world model. Although both models fit the training data well, we observe that neither generalizes to new trajectories: state rollouts diverge rapidly, which prevents policy convergence.
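The abstract does not include code, but the integration step it describes can be sketched. The following is a minimal illustration, not the authors' implementation: it assumes the world model outputs a body-frame net force and moment per step, and the constants MASS, J, and GRAVITY, along with the state layout [position, velocity, quaternion, angular rate], are hypothetical placeholders.

```python
import numpy as np

# Hypothetical physical constants; the paper does not specify these values.
MASS = 1.0                       # quadcopter mass [kg]
J = np.diag([0.01, 0.01, 0.02])  # body inertia matrix [kg m^2]
J_INV = np.linalg.inv(J)
GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity [m/s^2]

def quat_to_rot(q):
    """Rotation matrix (body -> world) from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def dynamics(state, force_body, moment_body):
    """6-DOF rigid-body derivatives for state = [p(3), v(3), q(4), w(3)]."""
    v, q, w = state[3:6], state[6:10], state[10:13]
    R = quat_to_rot(q)
    dp = v
    dv = R @ force_body / MASS + GRAVITY
    # Quaternion kinematics: q_dot = 0.5 * q (x) [0, w]
    wx, wy, wz = w
    qw, qx, qy, qz = q
    dq = 0.5 * np.array([
        -qx*wx - qy*wy - qz*wz,
         qw*wx + qy*wz - qz*wy,
         qw*wy - qx*wz + qz*wx,
         qw*wz + qx*wy - qy*wx,
    ])
    # Euler's rotation equation: J w_dot = M - w x (J w)
    dw = J_INV @ (moment_body - np.cross(w, J @ w))
    return np.concatenate([dp, dv, dq, dw])

def rk4_step(state, force_body, moment_body, dt):
    """One RK4 step, holding the predicted force/moment constant over dt."""
    f = lambda s: dynamics(s, force_body, moment_body)
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    nxt = state + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    nxt[6:10] /= np.linalg.norm(nxt[6:10])  # re-normalize the quaternion
    return nxt
```

In a multi-step rollout, the world model would be queried for a fresh (force, moment) prediction at each step and rk4_step applied repeatedly; the divergence the abstract reports would appear as these rollouts drifting away from ground-truth trajectories on unseen data.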
Similar Papers
Fault Tolerant Control of a Quadcopter using Reinforcement Learning
Robotics
Keeps drones flying even if a propeller breaks.
SkyDreamer: Interpretable End-to-End Vision-Based Drone Racing with Model-Based Reinforcement Learning
Robotics
Drones race themselves at high speed using only cameras.
World Models for Autonomous Navigation of Terrestrial Robots from LIDAR Observations
Robotics
Helps robots learn to drive better using less data.