Score: 1

Accelerating Model-Based Reinforcement Learning using Non-Linear Trajectory Optimization

Published: June 3, 2025 | arXiv ID: 2506.02767v1

By: Marco Calì, Giulio Giacomuzzo, Ruggero Carli and more

Potential Business Impact:

Teaches robots new skills much faster.

Business Areas:
A/B Testing, Data and Analytics

This paper addresses the slow policy-optimization convergence of Monte Carlo Probabilistic Inference for Learning Control (MC-PILCO), a state-of-the-art model-based reinforcement learning (MBRL) algorithm, by integrating it with the iterative Linear Quadratic Regulator (iLQR), a fast trajectory-optimization method for nonlinear systems. The proposed method, Exploration-Boosted MC-PILCO (EB-MC-PILCO), uses iLQR to generate informative, exploratory trajectories and to initialize the policy, significantly reducing the number of required optimization steps. Experiments on the cart-pole task show that EB-MC-PILCO converges faster than standard MC-PILCO, achieving up to a 45.9% reduction in execution time when both methods solve the task in four trials. EB-MC-PILCO also maintains a 100% success rate across trials while solving the task faster, even in cases where MC-PILCO converges in fewer iterations.
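To make the iLQR ingredient concrete, below is a minimal, generic iLQR sketch in Python/NumPy: a backward pass that builds a quadratic value-function approximation along the current trajectory, followed by a line-searched forward rollout. It illustrates the kind of trajectory optimizer the paper uses for exploration, but the pendulum dynamics, cost weights, horizon, and function names are hypothetical stand-ins, not the paper's cart-pole setup or its integration with MC-PILCO.

```python
# Minimal iLQR sketch (assumed example, not the paper's implementation).
import numpy as np

DT = 0.05          # integration step [s]
HORIZON = 100      # number of control steps

def dynamics(x, u):
    """Discrete-time pendulum: x = [theta, theta_dot], u = [torque]."""
    g, length, mass = 9.81, 1.0, 1.0
    th, thd = x
    thdd = (g / length) * np.sin(th) + u[0] / (mass * length**2)
    return np.array([th + DT * thd, thd + DT * thdd])

def jacobians(x, u, eps=1e-5):
    """Finite-difference linearization A = df/dx, B = df/du."""
    f0 = dynamics(x, u)
    A = np.column_stack([(dynamics(x + eps * e, u) - f0) / eps for e in np.eye(x.size)])
    B = np.column_stack([(dynamics(x, u + eps * e) - f0) / eps for e in np.eye(u.size)])
    return A, B

def total_cost(X, U, x_goal, Q, R, Qf):
    run = sum(0.5 * (x - x_goal) @ Q @ (x - x_goal) + 0.5 * u @ R @ u
              for x, u in zip(X[:-1], U))
    return run + 0.5 * (X[-1] - x_goal) @ Qf @ (X[-1] - x_goal)

def rollout(x0, U):
    X = [x0]
    for u in U:
        X.append(dynamics(X[-1], u))
    return X

def ilqr(x0, x_goal, U, Q, R, Qf, iters=50, reg=1e-6):
    X = rollout(x0, U)
    cost = total_cost(X, U, x_goal, Q, R, Qf)
    for _ in range(iters):
        # Backward pass: propagate the quadratic value approximation along (X, U).
        Vx, Vxx = Qf @ (X[-1] - x_goal), Qf.copy()
        ks, Ks = [], []
        for x, u in zip(reversed(X[:-1]), reversed(U)):
            A, B = jacobians(x, u)
            Qx = Q @ (x - x_goal) + A.T @ Vx
            Qu = R @ u + B.T @ Vx
            Qxx = Q + A.T @ Vxx @ A
            Quu = R + B.T @ Vxx @ B + reg * np.eye(u.size)
            Qux = B.T @ Vxx @ A
            k = -np.linalg.solve(Quu, Qu)      # feedforward term
            K = -np.linalg.solve(Quu, Qux)     # feedback gain
            Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
            ks.append(k); Ks.append(K)
        ks.reverse(); Ks.reverse()
        # Forward pass: backtracking line search on the feedforward step.
        for alpha in (1.0, 0.5, 0.25, 0.1):
            X_new, U_new = [x0], []
            for t in range(len(U)):
                u_new = U[t] + alpha * ks[t] + Ks[t] @ (X_new[t] - X[t])
                U_new.append(u_new)
                X_new.append(dynamics(X_new[t], u_new))
            new_cost = total_cost(X_new, U_new, x_goal, Q, R, Qf)
            if new_cost < cost:
                X, U, cost = X_new, U_new, new_cost
                break
    return np.array(X), np.array(U), cost

if __name__ == "__main__":
    x0 = np.array([np.pi, 0.0])        # pendulum hanging down
    x_goal = np.array([0.0, 0.0])      # upright, at rest
    U0 = [np.zeros(1) for _ in range(HORIZON)]
    Q, R, Qf = np.diag([1.0, 0.1]), 0.01 * np.eye(1), np.diag([100.0, 10.0])
    X, U, cost = ilqr(x0, x_goal, U0, Q, R, Qf)
    print(f"final trajectory cost: {cost:.3f}")
```

In EB-MC-PILCO, trajectories produced by an optimizer of this kind serve two roles described in the abstract: they provide informative exploratory data for the learned model and a warm start for the policy, which is what cuts the number of policy-optimization steps.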

Country of Origin
🇮🇹 Italy

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)