Improved Robustness of Deep Reinforcement Learning for Control of Time-Varying Systems by Bounded Extremum Seeking
By: Shaifalee Saxena, Alan Williams, Rafael Fierro, and more
Potential Business Impact:
Makes robots learn faster and stay steady.
In this paper, we study the use of robust model-independent bounded extremum seeking (ES) feedback control to improve the robustness of deep reinforcement learning (DRL) controllers for a class of nonlinear time-varying systems. DRL has the potential to learn from large datasets to quickly control or optimize the outputs of many-parameter systems, but its performance degrades catastrophically when the system model changes rapidly over time. Bounded ES can handle time-varying systems with unknown control directions, but its convergence speed slows down as the number of tuned parameters increases and, like all local adaptive methods, it can get stuck in local minima. We demonstrate that together, DRL and bounded ES result in a hybrid controller whose performance exceeds the sum of its parts: DRL takes advantage of historical data to learn how to quickly control a many-parameter system to a desired setpoint, while bounded ES ensures its robustness to time variations. We present a numerical study of a general time-varying system and a combined ES-DRL controller for automatic tuning of the Low Energy Beam Transport section at the Los Alamos Neutron Science Center linear particle accelerator.
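To make the bounded-ES idea concrete, the following is a minimal sketch (not the paper's implementation) of the standard bounded extremum seeking update, in which each tuned parameter dithers at its own frequency and the measured cost enters only through the phase of the dither, so the update rate stays bounded no matter how large the cost becomes. The frequencies, gains, and the drifting quadratic cost below are illustrative assumptions, not values from the paper.

```python
import math

def bounded_es_step(theta, cost, t, dt, alpha=1.0, k=2.0, omegas=None):
    """One Euler-discretized bounded-ES update.

    Each parameter follows d(theta_i)/dt = sqrt(alpha * omega_i) *
    cos(omega_i * t + k * cost): the cost only shifts the dither phase,
    so every step is bounded by sqrt(alpha * omega_i) * dt, and on
    average the parameters descend the cost gradient even when the
    sign of each parameter's influence (control direction) is unknown.
    """
    if omegas is None:
        # Distinct dither frequencies per parameter (an assumed choice).
        omegas = [50.0 * (1.0 + 0.1 * i) for i in range(len(theta))]
    return [
        th + dt * math.sqrt(alpha * w) * math.cos(w * t + k * cost)
        for th, w in zip(theta, omegas)
    ]

def track_moving_minimum(steps=20000, dt=1e-3):
    """Track the minimum of a slowly drifting quadratic cost,
    a toy stand-in for a time-varying system."""
    theta = [0.0, 0.0]
    cost = 0.0
    for n in range(steps):
        t = n * dt
        # Time-varying optimum the controller must follow.
        target = [math.sin(0.1 * t), math.cos(0.1 * t)]
        cost = sum((th - tg) ** 2 for th, tg in zip(theta, target))
        theta = bounded_es_step(theta, cost, t, dt)
    return theta, cost
```

In the hybrid scheme the abstract describes, a DRL policy would supply the initial parameter settings near the desired setpoint, and a loop like this one would continually correct for model drift around that operating point.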
Similar Papers
Data-Assimilated Model-Based Reinforcement Learning for Partially Observed Chaotic Flows
Systems and Control
Controls messy fluid flows using smart guessing.
Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control
Systems and Control
Teaches machines to control tricky systems better.
Harnessing Bounded-Support Evolution Strategies for Policy Refinement
Machine Learning (CS)
Makes robots learn difficult tasks more reliably.