Accelerating Model-Based Reinforcement Learning using Non-Linear Trajectory Optimization
By: Marco Calì, Giulio Giacomuzzo, Ruggero Carli, and more
Potential Business Impact:
Teaches robots new skills much faster.
This paper addresses the slow policy optimization convergence of Monte Carlo Probabilistic Inference for Learning Control (MC-PILCO), a state-of-the-art model-based reinforcement learning (MBRL) algorithm, by integrating it with the iterative Linear Quadratic Regulator (iLQR), a fast trajectory optimization method suited to nonlinear systems. The proposed method, Exploration-Boosted MC-PILCO (EB-MC-PILCO), leverages iLQR to generate informative, exploratory trajectories and to initialize the policy, significantly reducing the number of required optimization steps. Experiments on the cart-pole task demonstrate that EB-MC-PILCO accelerates convergence compared to standard MC-PILCO, achieving up to a 45.9% reduction in execution time when both methods solve the task in four trials. EB-MC-PILCO also maintains a 100% success rate across trials while solving the task faster, even in cases where MC-PILCO converges in fewer iterations.
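The trajectory-optimization ingredient here is iLQR: it alternates a backward pass, which linearizes the dynamics and quadraticizes the cost along a nominal trajectory to compute feedback gains, with a forward pass that rolls out the updated controls. Below is a minimal sketch of that loop on a simple pendulum swing-up; the dynamics, cost weights, horizon, and all names are illustrative assumptions, not the paper's cart-pole setup or its actual EB-MC-PILCO implementation.

```python
# Minimal iLQR sketch (NumPy). Illustrative only: pendulum dynamics, cost weights,
# and horizon are assumptions, not the paper's cart-pole formulation.
import numpy as np

dt, horizon = 0.05, 80          # integration step and trajectory length (assumed)
n_x, n_u = 2, 1                 # state [theta, omega], scalar torque input

def dynamics(x, u):
    """Forward-Euler pendulum; theta = 0 is the downward (stable) equilibrium."""
    theta, omega = x
    omega_dot = -9.81 * np.sin(theta) - 0.1 * omega + u[0]
    return x + dt * np.array([omega, omega_dot])

def cost(x, u):
    """Quadratic stage cost penalising distance from the upright position (theta = pi)."""
    return 10.0 * (x[0] - np.pi) ** 2 + 0.1 * x[1] ** 2 + 0.01 * u[0] ** 2

def rollout(x0, us):
    xs = [x0]
    for u in us:
        xs.append(dynamics(xs[-1], u))
    return np.array(xs)

def finite_diff_jacobians(x, u, eps=1e-5):
    """Numerical A = df/dx and B = df/du around (x, u)."""
    A = np.zeros((n_x, n_x)); B = np.zeros((n_x, n_u))
    for i in range(n_x):
        dx = np.zeros(n_x); dx[i] = eps
        A[:, i] = (dynamics(x + dx, u) - dynamics(x - dx, u)) / (2 * eps)
    for i in range(n_u):
        du = np.zeros(n_u); du[i] = eps
        B[:, i] = (dynamics(x, u + du) - dynamics(x, u - du)) / (2 * eps)
    return A, B

def ilqr(x0, us, iters=50, reg=1e-3, alpha=0.5):
    for _ in range(iters):
        xs = rollout(x0, us)
        # Backward pass: expand the value function with cost derivatives
        # matching cost() above (terminal cost taken equal to the stage state cost).
        Vx = np.array([20.0 * (xs[-1][0] - np.pi), 0.2 * xs[-1][1]])
        Vxx = np.diag([20.0, 0.2])
        ks, Ks = [], []
        for t in reversed(range(horizon)):
            A, B = finite_diff_jacobians(xs[t], us[t])
            lx = np.array([20.0 * (xs[t][0] - np.pi), 0.2 * xs[t][1]])
            lu = np.array([0.02 * us[t][0]])
            lxx = np.diag([20.0, 0.2]); luu = np.array([[0.02]])
            Qx = lx + A.T @ Vx
            Qu = lu + B.T @ Vx
            Qxx = lxx + A.T @ Vxx @ A
            Quu = luu + B.T @ Vxx @ B + reg * np.eye(n_u)
            Qux = B.T @ Vxx @ A
            k = -np.linalg.solve(Quu, Qu)
            K = -np.linalg.solve(Quu, Qux)
            Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
            ks.append(k); Ks.append(K)
        ks.reverse(); Ks.reverse()
        # Forward pass: apply feedforward + feedback corrections along the rollout.
        # A full implementation would add a line search over alpha; fixed here for brevity.
        new_us, x = [], x0.copy()
        for t in range(horizon):
            u = us[t] + alpha * ks[t] + Ks[t] @ (x - xs[t])
            new_us.append(u)
            x = dynamics(x, u)
        us = np.array(new_us)
    return us, rollout(x0, us)

if __name__ == "__main__":
    us_opt, xs_opt = ilqr(np.array([0.0, 0.0]), np.zeros((horizon, n_u)))
    total = sum(cost(x, u) for x, u in zip(xs_opt[:-1], us_opt))
    # With enough iterations the final angle should move toward pi (upright).
    print("final angle (rad):", xs_opt[-1][0], "trajectory cost:", total)
```

In the approach described by the abstract, trajectories produced this way serve as informative exploration data and as a warm start for the policy that MC-PILCO then refines; the sketch above covers only the trajectory-optimization step, not the Gaussian-process model learning or policy optimization.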
Similar Papers
Learning global control of underactuated systems with Model-Based Reinforcement Learning
Robotics
Teaches robots to learn new tasks faster.
A KL-regularization framework for learning to plan with adaptive priors
Machine Learning (CS)
Teaches robots to learn faster and better.
Towards Causal Model-Based Policy Optimization
Machine Learning (CS)
Teaches computers to make better choices when things change.