Rich State Observations Empower Reinforcement Learning to Surpass PID: A Drone Ball Balancing Study
By: Mingjiang Liu, Hailong Huang
Potential Business Impact:
A drone balances a ball on a beam using smart learning.
This paper addresses a drone ball-balancing task, in which a drone stabilizes a ball atop a movable beam through cable-based interaction. We propose a hierarchical control framework that decouples the high-level balancing policy from low-level drone control, and we train a reinforcement learning (RL) policy to handle the high-level decision-making. Simulation results show that the RL policy outperforms carefully tuned PID controllers within the same hierarchical structure. Through systematic comparative analysis, we demonstrate that RL's advantage stems not from better parameter tuning or inherent nonlinear mapping capabilities, but from its ability to exploit richer state observations. These findings underscore the critical role of comprehensive state representation in learning-based systems and suggest that enhanced sensing could be instrumental in improving controller performance.
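The abstract does not specify the controller interfaces, so the sketch below is only a minimal illustration of the hierarchical idea it describes: a high-level policy (a PID loop on ball position, or an RL network over a richer observation vector) emits a horizontal setpoint that a low-level drone tracker follows. All class names, gains, the observation layout, and the placeholder network weights are assumptions, not the authors' implementation.

import numpy as np

class PIDBalancer:
    # High-level PID balancer sketch: maps the ball's position error on the beam
    # to a horizontal drone setpoint. Gains and interface are illustrative only.
    def __init__(self, kp=1.2, ki=0.0, kd=0.6, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, ball_pos, target=0.0):
        error = target - ball_pos
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class RLBalancer:
    # High-level RL policy sketch: a small feedforward network over a richer
    # observation (ball position/velocity, beam angle/rate, drone state).
    # Weights are random placeholders; in practice they come from RL training.
    def __init__(self, obs_dim=8, hidden=32, act_dim=1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(obs_dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, act_dim))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)  # bounded setpoint command

def low_level_tracker(drone_pos, setpoint, gain=2.0):
    # Stand-in for the low-level drone controller: a simple proportional
    # velocity command that tracks the high-level setpoint.
    return gain * (setpoint - drone_pos)

# Toy rollout step, purely illustrative:
pid = PIDBalancer()
rl = RLBalancer()
obs = np.zeros(8)      # hypothetical layout: [ball pos, ball vel, beam angle, beam rate, drone x, drone vx, ...]
obs[0] = 0.15          # ball displaced 15 cm from the beam center
setpoint_pid = pid.act(ball_pos=obs[0])      # PID sees only the ball position error
setpoint_rl = rl.act(obs).item()             # RL policy sees the full observation
cmd = low_level_tracker(drone_pos=0.0, setpoint=setpoint_rl)

The contrast the sketch is meant to highlight is the one the paper's analysis attributes RL's advantage to: both controllers sit above the same low-level tracker, but the PID loop consumes a single error signal while the RL policy conditions on a richer state vector.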
Similar Papers
Predictive reinforcement learning based adaptive PID controller
Systems and Control
Makes wobbly machines move smoothly and accurately.
Dynamic Legged Ball Manipulation on Rugged Terrains with Hierarchical Reinforcement Learning
Robotics
Robot dogs learn to dribble a ball over rough ground.
Adaptive PID Control for Robotic Systems via Hierarchical Meta-Learning and Reinforcement Learning with Physics-Based Data Augmentation
Robotics
Teaches robots to learn faster and better.