On Architectures for Combining Reinforcement Learning and Model Predictive Control with Runtime Improvements

Published: October 2, 2025 | arXiv ID: 2510.03354v1

By: Xiaolong Jia, Nikhil Bajaj

Potential Business Impact:

Enables robots to run model-predictive control orders of magnitude faster while remaining robust to modeling errors.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Model Predictive Control (MPC) suffers from high computational demands and from performance degradation caused by model inaccuracies. We propose two architectures that combine Neural Network-approximated MPC (NNMPC) with Reinforcement Learning (RL). The first, Warm Start RL, initializes the RL actor with pre-trained NNMPC weights. The second, RLMPC, uses RL to generate corrective residuals for the NNMPC outputs. We also introduce a downsampling method that reduces NNMPC input dimensionality while maintaining performance. Evaluated on a rotary inverted pendulum, both architectures achieve runtime reductions exceeding 99% compared to traditional MPC while improving tracking performance under model uncertainties, with RLMPC achieving 11-40% cost reduction depending on reference amplitude.
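To make the two architectures concrete, here is a minimal sketch of how they might be wired together. The network shapes, the `MLPPolicy` class, and the pendulum state dimension are illustrative assumptions, not the paper's actual implementation; only the structural ideas (warm-starting the actor from NNMPC weights, and adding a learned residual to the NNMPC output) come from the abstract.

```python
import numpy as np

# Hypothetical single-hidden-layer policy network; the paper's actual
# NNMPC and RL actor architectures are not specified here.
class MLPPolicy:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def __call__(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ h + self.b2

    def copy_weights_from(self, other):
        # Warm Start RL: initialize the RL actor from pre-trained NNMPC weights.
        self.W1, self.b1 = other.W1.copy(), other.b1.copy()
        self.W2, self.b2 = other.W2.copy(), other.b2.copy()

# NNMPC: a network trained offline to imitate the MPC control law
# (training omitted; weights here are random placeholders).
nnmpc = MLPPolicy(n_in=4, n_hidden=32, n_out=1, seed=1)

# Architecture 1 (Warm Start RL): the actor starts from NNMPC weights
# and is then fine-tuned by the RL algorithm (fine-tuning omitted).
actor = MLPPolicy(n_in=4, n_hidden=32, n_out=1, seed=2)
actor.copy_weights_from(nnmpc)

# Architecture 2 (RLMPC): a separate residual policy corrects the
# NNMPC output; the RL agent learns only the correction term.
residual = MLPPolicy(n_in=4, n_hidden=32, n_out=1, seed=3)

def rlmpc_control(x):
    # Final control = NNMPC output + learned corrective residual.
    return nnmpc(x) + residual(x)

# Example: a 4-dimensional state, e.g. angles and rates of a
# rotary inverted pendulum (dimension assumed for illustration).
x = np.array([0.1, 0.0, -0.05, 0.02])
u = rlmpc_control(x)
```

Either way, only cheap network forward passes run online, which is what drives the reported >99% runtime reduction relative to solving the MPC optimization at every step.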

Page Count
8 pages

Category
Electrical Engineering and Systems Science:
Systems and Control