Predictive reinforcement learning based adaptive PID controller
By: Chaoqun Ma, Zhiyong Zhang
Potential Business Impact:
Makes wobbly machines move smoothly and accurately.
Purpose: This study aims to address the challenges of controlling unstable and nonlinear systems by proposing an adaptive PID controller based on predictive reinforcement learning (PRL-PID), which combines the advantages of both data-driven and model-driven approaches.
Design/methodology/approach: A predictive reinforcement learning framework is introduced, incorporating an action-smoothing strategy to suppress overshoot and oscillations, and a hierarchical reward function to guide training.
Findings: Experimental results show that the PRL-PID controller achieves superior stability and tracking accuracy in nonlinear, unstable, and strongly coupled systems, consistently outperforming existing RL-tuned PID methods while maintaining excellent robustness and adaptability across diverse operating conditions.
Originality/Value: By adopting predictive learning, the proposed PRL-PID integrates system model priors into data-driven control, improving both the framework's training efficiency and the controller's stability. As a result, PRL-PID offers a balanced blend of model-based and data-driven approaches, delivering robust, high-performance control.
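To make the RL-tuned PID idea concrete, the following is a minimal sketch (not the authors' implementation, whose details are not given in the abstract): a PID controller whose gains would be proposed at each step by an RL policy, with a simple exponential filter on gain updates standing in for the paper's action-smoothing strategy. The class name, smoothing factor, and toy first-order plant are all illustrative assumptions.

```python
class AdaptivePID:
    """PID controller with externally adapted (e.g., RL-proposed) gains."""

    def __init__(self, kp, ki, kd, dt=0.01, smooth=0.9):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.smooth = smooth        # smoothing factor: stand-in for action smoothing
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        # Blend proposed gains with current ones to damp abrupt changes
        a = self.smooth
        self.kp = a * self.kp + (1 - a) * kp
        self.ki = a * self.ki + (1 - a) * ki
        self.kd = a * self.kd + (1 - a) * kd

    def step(self, error):
        # Standard discrete PID law: u = Kp*e + Ki*∫e dt + Kd*de/dt
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy demo: track a unit setpoint on the first-order plant dx/dt = -x + u
pid = AdaptivePID(kp=2.0, ki=0.5, kd=0.1)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0 - x)           # error = setpoint - state
    x += (-x + u) * pid.dt          # Euler integration of the plant
```

In the full method, `set_gains` would be driven by the trained policy's output at every control interval; here the gains are fixed only to keep the example self-contained.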
Similar Papers
Adaptive PID Control for Robotic Systems via Hierarchical Meta-Learning and Reinforcement Learning with Physics-Based Data Augmentation
Robotics
Teaches robots to learn faster and better.
Rich State Observations Empower Reinforcement Learning to Surpass PID: A Drone Ball Balancing Study
Robotics
Drone balances ball on beam using smart learning.
Predictive Lagrangian Optimization for Constrained Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn tasks with fewer mistakes.