Safety Assessment in Reinforcement Learning via Model Predictive Control

Published: October 23, 2025 | arXiv ID: 2510.20955v1

By: Jeff Pflueger, Michael Everett

Potential Business Impact:

Keeps robots from taking dangerous actions while they learn.

Business Areas:
Industrial Automation, Manufacturing, Science and Engineering

Model-free reinforcement learning approaches are promising for control but typically lack formal safety guarantees. Existing methods to shield or otherwise provide these guarantees often rely on detailed knowledge of the safety specifications. Instead, this work's insight is that many difficult-to-specify safety issues are best characterized by invariance. Accordingly, we propose to leverage reversibility as a method for preventing these safety issues throughout the training process. Our method uses model-predictive path integral control to check the safety of an action proposed by a learned policy throughout training. A key advantage of this approach is that it only requires the ability to query the black-box dynamics, not explicit knowledge of the dynamics or safety constraints. Experimental results demonstrate that the proposed algorithm successfully aborts before all unsafe actions, while still achieving comparable training progress to a baseline PPO approach that is allowed to violate safety.
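The core idea can be sketched in code: before executing a proposed action, roll out sampled recovery-action sequences through the black-box dynamics and accept the action only if some rollout returns to the safe (invariant) set. The sketch below is a simplified feasibility check in that spirit, not the paper's full MPPI weighting scheme; the function names (`mppi_safety_check`, `dynamics`, `is_invariant`) and all parameter values are hypothetical.

```python
import numpy as np

def mppi_safety_check(dynamics, state, proposed_action, is_invariant,
                      horizon=20, num_samples=64, noise_std=0.5, seed=0):
    """Accept `proposed_action` only if at least one sampled recovery
    rollout re-enters the invariant set (a reversibility-style check).

    `dynamics` is a black-box function s' = dynamics(s, a): only forward
    queries are needed, not an explicit model or safety constraints.
    """
    action_dim = np.shape(proposed_action)[0]
    # One black-box query: the state reached by the proposed action.
    next_state = dynamics(state, proposed_action)
    # Sample MPPI-style perturbed recovery action sequences.
    rng = np.random.default_rng(seed)
    recovery = rng.normal(0.0, noise_std,
                          size=(num_samples, horizon, action_dim))
    for k in range(num_samples):
        s = next_state
        for t in range(horizon):
            s = dynamics(s, recovery[k, t])
            if is_invariant(s):
                return True   # a recovery plan exists: action is safe
    return False              # no sampled rollout recovers: abort
```

As a toy usage, with single-integrator dynamics `s' = s + 0.1*a` and the invariant set `|s| < 0.5`, the check accepts a null action taken near the origin but rejects it from a state far outside the recoverable region, since no sampled rollout can return within the horizon.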

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)