Multi-fidelity Reinforcement Learning Control for Complex Dynamical Systems
By: Luning Sun, Xin-Yang Liu, Siyan Zhao, and more
Potential Business Impact:
Teaches computers to control tricky systems faster.
Controlling instabilities in complex dynamical systems is challenging in scientific and engineering applications. Deep reinforcement learning (DRL) has shown promising results across a range of scientific problems. However, the many-query nature of control tasks requires repeated interaction with an environment governed by the underlying physics, and such interaction data are typically sparse when collected from experiments and expensive when generated by simulations of complex dynamics. Alternatively, performing control on a surrogate model could mitigate the computational cost; however, fast learning-based surrogates trained offline struggle to reproduce accurate pointwise dynamics when the system is chaotic. To bridge this gap, the current work proposes a multi-fidelity reinforcement learning (MFRL) framework that leverages differentiable hybrid models for control tasks, in which a physics-based hybrid model is corrected with limited high-fidelity data. We also propose a spectrum-based reward function for RL training. The effectiveness of the proposed framework is demonstrated on two complex dynamical systems in physics. The statistics of the MFRL control results match those computed from many-query evaluations of the high-fidelity environment and outperform other state-of-the-art (SOTA) baselines.
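The abstract does not spell out the hybrid model's architecture. As a rough illustration of a differentiable hybrid model corrected by limited high-fidelity data, the sketch below pairs a differentiable low-fidelity physics step with a small learned correction network; the class name `HybridModel`, the network size, and the training setup are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Sketch of a differentiable hybrid surrogate: a low-fidelity physics
    step plus a learned correction fitted to scarce high-fidelity data
    (illustrative assumption, not the paper's code)."""

    def __init__(self, state_dim: int, physics_step):
        super().__init__()
        # physics_step: differentiable low-fidelity solver, state -> next state
        self.physics_step = physics_step
        self.correction = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, state_dim)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Correct the low-fidelity prediction toward high-fidelity dynamics.
        return self.physics_step(state) + self.correction(state)

def fit_correction(model: HybridModel, states, next_states, epochs=200):
    """Fit the correction on limited high-fidelity (state, next_state) pairs."""
    opt = torch.optim.Adam(model.correction.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(states), next_states)
        loss.backward()
        opt.step()
```

Likewise, the paper's spectrum-based reward is not given in closed form here. One plausible reading, sketched below, scores a controlled trajectory by how closely its power spectrum matches a reference spectrum; the function names and the log-spectrum L2 distance are assumptions.

```python
import numpy as np

def power_spectrum(signal: np.ndarray) -> np.ndarray:
    """Power spectral density of a 1D trajectory via the real FFT."""
    fft = np.fft.rfft(signal - signal.mean())
    return np.abs(fft) ** 2 / len(signal)

def spectrum_reward(trajectory: np.ndarray, reference: np.ndarray,
                    eps: float = 1e-12) -> float:
    """Hypothetical spectrum-based reward: negative L2 distance between log
    power spectra (trajectories assumed equal length), so matching the
    reference statistics is rewarded even when pointwise states diverge."""
    d = np.log(power_spectrum(trajectory) + eps) - np.log(power_spectrum(reference) + eps)
    return -float(np.linalg.norm(d))
```

A statistics-level reward of this kind suits chaotic systems, where tracking pointwise states is infeasible but matching spectral statistics remains meaningful.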
Similar Papers
Adaptive Multi-Fidelity Reinforcement Learning for Variance Reduction in Engineering Design Optimization
Machine Learning (CS)
Teaches robots better by using different skill levels.
Model-based controller assisted domain randomization in deep reinforcement learning: application to nonlinear powertrain control
Systems and Control
Teaches machines to control tricky systems better.
Data-assimilated model-informed reinforcement learning
Systems and Control
Controls wild, unpredictable things with less information.