Probabilistically safe and efficient model-based reinforcement learning

Published: April 1, 2025 | arXiv ID: 2504.00626v2

By: Filippo Airaldi, Bart De Schutter, Azita Dabiri

Potential Business Impact:

Makes robots learn to do dangerous jobs safely.

Business Areas:
Simulation Software

This paper proposes tackling safety-critical stochastic Reinforcement Learning (RL) tasks with a sample-based, model-based approach. At the core of the method lies a Model Predictive Control (MPC) scheme that acts as a function approximator, providing a model-based predictive control policy. To ensure safety, a probabilistic Control Barrier Function (CBF) is integrated into the MPC controller. To approximate the effects of stochasticities in the optimal control formulation and to fulfil the probabilistic CBF condition, a sample-based approach with guarantees is employed. Furthermore, to counterbalance the additional computational burden due to sampling, a learnable terminal cost formulation is included in the MPC objective. An RL algorithm is deployed to learn both the terminal cost and the CBF constraint. Results from a numerical experiment on a constrained LTI problem corroborate the effectiveness of the proposed methodology in reducing computation time while preserving control performance and safety.
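To make the core idea concrete, the sketch below illustrates (in a hedged, simplified form, not the authors' code) how a discrete-time probabilistic CBF condition can be checked empirically by sampling disturbances of a stochastic LTI system x⁺ = Ax + Bu + w. The dynamics, barrier function `h`, decay rate `alpha`, and violation tolerance `eps` are all illustrative assumptions; the paper's actual formulation embeds such a condition as a constraint inside the MPC optimization and provides formal sample-complexity guarantees.

```python
import numpy as np

# Illustrative LTI system x+ = A x + B u + w (assumed values, not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

rng = np.random.default_rng(0)


def h(x):
    # Barrier function defining the safe set {x : h(x) >= 0};
    # here a simple bound on the first state component (illustrative choice).
    return 1.0 - x[0] ** 2


def cbf_satisfied(x, u, alpha=0.9, eps=0.05, n_samples=200):
    """Empirically check the probabilistic CBF-style condition
    P[h(x+) >= alpha * h(x)] >= 1 - eps by Monte Carlo sampling
    of the additive disturbance w (a sample-based approximation)."""
    w = rng.normal(0.0, 0.02, size=(n_samples, 2))      # sampled disturbances
    x_next = (A @ x + B @ u) + w                        # batch of successors
    frac_ok = np.mean(1.0 - x_next[:, 0] ** 2 >= alpha * h(x))
    return frac_ok >= 1.0 - eps


# Near the origin the barrier decay condition holds for (almost) all samples;
# far outside the safe set it fails for (almost) all samples.
print(cbf_satisfied(np.array([0.0, 0.0]), np.array([0.0])))
print(cbf_satisfied(np.array([2.0, 0.0]), np.array([0.0])))
```

In the paper's setting, each MPC step would impose this kind of sampled condition as a constraint on the planned inputs, and the learnable terminal cost lets the prediction horizon (and hence the number of sampled constraints) stay short.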

Country of Origin
🇳🇱 Netherlands

Page Count
8 pages

Category
Electrical Engineering and Systems Science:
Systems and Control