Enhancing Safety in Reinforcement Learning via ADRC-Lagrangian Methods
By: Mingxu Zhang, Huicheng Zhang, Jiaming Ji, and more
Potential Business Impact:
Keeps robots safe while they learn new tasks.
Safe reinforcement learning (Safe RL) seeks to maximize rewards while satisfying safety constraints, typically addressed through Lagrangian-based methods. However, existing approaches, including PID and classical Lagrangian methods, suffer from oscillations and frequent safety violations due to parameter sensitivity and inherent phase lag. To address these limitations, we propose ADRC-Lagrangian methods that leverage Active Disturbance Rejection Control (ADRC) for enhanced robustness and reduced oscillations. Our unified framework encompasses classical and PID Lagrangian methods as special cases while significantly improving safety performance. Extensive experiments demonstrate that our approach reduces safety violations by up to 74%, constraint violation magnitudes by 89%, and average costs by 67%, establishing superior effectiveness for Safe RL in complex environments.
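The abstract does not give the paper's actual update rules, but the contrast between the three controller families can be sketched. Below is a minimal, hypothetical Python sketch: `classical_update` is the standard integral-style multiplier ascent, `PIDLagrangian` adds proportional and derivative terms with fixed gains, and `ADRCLagrangian` runs a linear extended state observer (ESO) whose disturbance estimate `z2` is cancelled before the proportional correction toward the cost limit. All class names, gains, and the first-order model of how the multiplier affects the episodic cost are assumptions made for illustration, not the authors' implementation.

```python
import random

def classical_update(lmbda, cost, limit, lr):
    """Classical Lagrangian method: pure integral action on the violation.
    Slow to react and prone to overshoot (the phase lag the abstract notes)."""
    return max(0.0, lmbda + lr * (cost - limit))

class PIDLagrangian:
    """PID-style multiplier update: proportional and derivative terms speed
    up the response, but fixed gains make it sensitive to tuning."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, cost, limit):
        err = cost - limit
        self.integral = max(0.0, self.integral + err)
        deriv = err - self.prev_err
        self.prev_err = err
        return max(0.0, self.kp * err + self.ki * self.integral + self.kd * deriv)

class ADRCLagrangian:
    """Hypothetical ADRC-style update. A linear extended state observer (ESO)
    tracks the measured episodic cost (z1) and lumps noise, lag, and unmodeled
    learning dynamics into a disturbance estimate (z2), which is rejected
    before the proportional correction."""
    def __init__(self, kp, beta1, beta2, b0=-1.0, dt=1.0):
        self.kp, self.beta1, self.beta2 = kp, beta1, beta2
        self.b0, self.dt = b0, dt  # b0 < 0: a larger multiplier lowers cost
        self.z1, self.z2 = 0.0, 0.0
        self.lmbda = 0.0

    def update(self, cost, limit):
        e = self.z1 - cost
        # ESO step: z1 chases the cost signal, z2 absorbs the residual.
        self.z1 += self.dt * (self.z2 + self.b0 * self.lmbda - self.beta1 * e)
        self.z2 += self.dt * (-self.beta2 * e)
        # Cancel the estimated disturbance, then steer cost toward the limit.
        u0 = self.kp * (limit - self.z1)
        self.lmbda = max(0.0, (u0 - self.z2) / self.b0)
        return self.lmbda

if __name__ == "__main__":
    random.seed(0)
    ctrl = ADRCLagrangian(kp=0.5, beta1=1.0, beta2=0.25)
    cost = 60.0  # toy episodic cost, well above the limit
    for _ in range(50):
        lam = ctrl.update(cost, limit=25.0)
        # Toy "environment": a larger multiplier pushes expected cost down.
        cost = max(0.0, cost - 0.8 * lam + random.gauss(0.0, 2.0))
    print(f"final cost ~{cost:.1f}, multiplier ~{lam:.2f}")
```

Presumably the "special cases" claim comes from observer-gain choices that collapse the ESO back to integral or PID behavior, but the abstract does not spell out that reduction, so the sketch above should be read only as an illustration of the control-theoretic contrast.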
Similar Papers
An Empirical Study of Lagrangian Methods in Safe Reinforcement Learning
Machine Learning (CS)
Makes robots learn safely and perform better.
Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
Robotics
Teaches robots to learn safely without breaking things.
MPC-Guided Safe Reinforcement Learning and Lipschitz-Based Filtering for Structured Nonlinear Systems
Robotics
Makes robots and cars safer and smarter.