Dyna-Style Reinforcement Learning Modeling and Control of Non-linear Dynamics
By: Karim Abdelsalam, Zeyad Gamal, Ayman El-Badawy
Controlling systems with complex, nonlinear dynamics poses a significant challenge, particularly when efficient and robust control is required. In this paper, we propose a Dyna-style reinforcement learning control framework that integrates Sparse Identification of Nonlinear Dynamics (SINDy) with Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning. SINDy is used to identify a data-driven model of the system, capturing its key dynamics without requiring an explicit physical model. This identified model generates synthetic rollouts that are periodically injected into the reinforcement learning replay buffer during training on the real environment, enabling efficient policy learning with limited real-world data. This hybrid approach mitigates the sample inefficiency of traditional model-free reinforcement learning methods while ensuring accurate control of nonlinear systems. To demonstrate the effectiveness of the framework, we apply it to a bi-rotor system as a case study, evaluating its performance in stabilization and trajectory tracking. The results show that our SINDy-TD3 approach achieves superior accuracy and robustness compared to direct reinforcement learning techniques, highlighting the potential of combining data-driven modeling with reinforcement learning for complex dynamical systems.
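The two ingredients of the framework can be illustrated with a toy sketch: SINDy's sequentially thresholded least squares (STLSQ) identifies sparse dynamics from data, and the identified model is then rolled out to produce synthetic transitions for a replay buffer. This is a minimal 1-D illustration with a hypothetical polynomial library and toy dynamics, not the authors' bi-rotor implementation, and it omits the reward, action, and TD3 update for brevity.

```python
import numpy as np

def sindy_stlsq(x, x_dot, threshold=0.2, iters=10):
    """Toy 1-D SINDy via sequentially thresholded least squares.

    Library: [1, x, x^2, x^3] (an assumption for this sketch)."""
    theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
    xi = np.linalg.lstsq(theta, x_dot, rcond=None)[0]
    for _ in range(iters):
        active = np.abs(xi) >= threshold
        xi[~active] = 0.0                       # enforce sparsity
        if active.any():                        # refit on active terms only
            xi[active] = np.linalg.lstsq(theta[:, active], x_dot, rcond=None)[0]
    return xi

# Hypothetical toy dynamics: x_dot = -2 x + 0.5 x^3
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
x_dot = -2.0 * x + 0.5 * x**3

xi = sindy_stlsq(x, x_dot)                      # recovers [0, -2, 0, 0.5]

# Dyna-style step: roll out the identified model (Euler integration)
# to generate synthetic transitions for the replay buffer.
dt, s, synthetic_buffer = 0.01, 0.8, []
for _ in range(50):
    s_dot = xi @ np.array([1.0, s, s**2, s**3])
    s_next = s + dt * s_dot
    synthetic_buffer.append((s, s_next))        # (state, next_state) only
    s = s_next
```

In the full method, such synthetic transitions would be mixed periodically with real-environment transitions in the TD3 replay buffer, so the policy trains on far more data than the real system alone provides.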