Global Convergence of Policy Gradient for Entropy-Regularized Linear-Quadratic Control with Multiplicative Noise
By: Gabriel Diaz, Lucky Li, Wenhao Zhang
Potential Business Impact:
Teaches computers to learn and make good choices.
Reinforcement Learning (RL) has emerged as a powerful framework for sequential decision-making in dynamic environments, particularly when system parameters are unknown. This paper investigates RL-based control for entropy-regularized Linear-Quadratic Control (LQC) problems with multiplicative noise over an infinite time horizon. First, we adapt the Regularized Policy Gradient (RPG) algorithm to stochastic optimal control settings, proving that despite the non-convexity of the problem, RPG converges globally under conditions of gradient domination and near-smoothness. Second, based on a zero-order optimization approach, we introduce a novel model-free RL algorithm: Sample-Based Regularized Policy Gradient (SB-RPG). SB-RPG operates without knowledge of the system parameters yet retains strong theoretical guarantees of global convergence. Our method leverages entropy regularization to accelerate convergence and address the exploration-exploitation trade-off inherent in RL. Numerical simulations validate the theoretical results and demonstrate the efficacy of SB-RPG in environments with unknown parameters.
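To make the sample-based idea concrete, below is a minimal sketch of a zero-order (smoothing-based) policy gradient loop for an entropy-regularized LQ problem with multiplicative noise. All quantities here are illustrative assumptions, not the paper's exact SB-RPG: the matrices A, B, the noise channel C, the weights Q, R, the entropy weight tau, the rollout horizon, the step size, and the spherical-smoothing gradient estimator are placeholder choices used only to show the shape of such an algorithm.

```python
import numpy as np

# --- Illustrative problem setup (all matrices and constants are assumptions) ---
rng = np.random.default_rng(0)
n, m = 2, 1                      # state / control dimensions
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = 0.05 * np.eye(n)             # multiplicative-noise channel on the state
Q = np.eye(n)
R = 0.1 * np.eye(m)
gamma = 0.95                     # discount factor
tau = 0.1                        # entropy-regularization weight
sigma = 0.3                      # exploration std of the Gaussian policy
T = 60                           # rollout horizon (truncates the infinite horizon)

def rollout_cost(K, episodes=20):
    """Monte-Carlo estimate of the entropy-regularized discounted cost for
    the Gaussian policy u_t = -K x_t + sigma * eps_t."""
    total = 0.0
    entropy = 0.5 * m * np.log(2 * np.pi * np.e * sigma**2)
    for _ in range(episodes):
        x = rng.normal(size=(n,))
        cost = 0.0
        for t in range(T):
            u = -K @ x + sigma * rng.normal(size=(m,))
            stage = x @ Q @ x + u @ R @ u - tau * entropy
            cost += (gamma**t) * stage
            w = rng.normal()                      # scalar multiplicative noise
            x = (A + w * C) @ x + B @ u
        total += cost
    return total / episodes

def zo_gradient(K, radius=0.05, samples=30):
    """Zero-order gradient estimate: perturb K on a sphere of the given
    radius and average cost-weighted perturbation directions."""
    d = K.size
    grad = np.zeros_like(K)
    for _ in range(samples):
        U = rng.normal(size=K.shape)
        U *= radius / np.linalg.norm(U)           # uniform direction, norm = radius
        grad += rollout_cost(K + U) * U
    return (d / (samples * radius**2)) * grad

# --- Sample-based regularized policy gradient loop (sketch) ---
K = np.zeros((m, n))
for it in range(50):
    K -= 1e-3 * zo_gradient(K)
    if it % 10 == 0:
        print(f"iter {it:3d}  est. cost {rollout_cost(K):.3f}")
```

The key point of the model-free setting is visible in the sketch: the gradient step uses only sampled rollout costs of perturbed gains, never the matrices A, B, or C themselves, while the entropy term (here, the closed-form entropy of the fixed-variance Gaussian policy) regularizes the objective.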
Similar Papers
Toward Optimal Statistical Inference in Noisy Linear Quadratic Reinforcement Learning over a Finite Horizon
Statistics Theory
Shows how sure a robot is about its choices.
Residual Policy Gradient: A Reward View of KL-regularized Objective
Machine Learning (CS)
Lets robots learn new tricks without forgetting old ones.
Reparameterization Proximal Policy Optimization
Machine Learning (CS)
Teaches robots to learn faster and more reliably.