A Survey of Safe Reinforcement Learning and Constrained MDPs: A Technical Survey on Single-Agent and Multi-Agent Safety
By: Ankita Kushwaha, Kiran Ravish, Preeti Lamba, and more
Potential Business Impact:
Enables robots and other autonomous systems to learn new tasks while respecting safety constraints, avoiding costly or dangerous mistakes during training and deployment.
Safe Reinforcement Learning (SafeRL) is the subfield of reinforcement learning that explicitly accounts for safety constraints during both the learning and deployment of agents. This survey provides a mathematically rigorous overview of SafeRL formulations based on Constrained Markov Decision Processes (CMDPs) and their extensions to Multi-Agent Safe RL (SafeMARL). We review the theoretical foundations of CMDPs, covering definitions, constrained optimization techniques, and fundamental theorems. We then summarize state-of-the-art single-agent SafeRL algorithms, including policy gradient methods with safety guarantees and safe exploration strategies, as well as recent advances in SafeMARL for cooperative and competitive settings. Additionally, we propose five open research problems to advance the field, three of which focus on SafeMARL; each is described with its motivation, key challenges, and related prior work. This survey is intended as a technical guide for researchers interested in SafeRL and SafeMARL, highlighting key concepts, methods, and open research directions.
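As context for the CMDP formulation the abstract refers to: the standard setting augments an MDP with a cost function c and a budget d, and the agent maximizes expected discounted reward subject to a bound on expected discounted cost. The notation below (J_r, J_c, d) follows the common convention in the SafeRL literature and is not necessarily the survey's exact symbols.

```latex
% Standard CMDP objective: maximize expected discounted reward
% subject to a bound d on expected discounted cost.
\max_{\pi} \; J_r(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
J_c(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \le d.
```

A widely used constrained-optimization technique for this objective is the Lagrangian relaxation, solved with alternating primal-dual updates. The following is a minimal toy sketch of that idea; the analytic reward/cost surrogates (`estimate_returns`) are hypothetical stand-ins for rollout-based policy-gradient estimates and are not taken from the survey.

```python
# Minimal sketch of a primal-dual Lagrangian update for a CMDP.
# Toy illustration only: a real SafeRL agent would replace
# estimate_returns() with Monte Carlo policy-gradient estimates.
import numpy as np

d = 0.5                 # cost budget (constraint threshold)
theta = np.zeros(2)     # toy policy parameters
lam = 0.0               # Lagrange multiplier (dual variable)
eta_theta, eta_lam = 0.05, 0.1

def estimate_returns(theta):
    """Hypothetical analytic surrogates for expected reward/cost
    and their gradients with respect to the policy parameters."""
    J_r = -np.sum((theta - 1.0) ** 2)   # reward peaks at theta = 1
    J_c = np.sum(theta ** 2)            # cost grows with |theta|
    grad_r = -2.0 * (theta - 1.0)
    grad_c = 2.0 * theta
    return J_r, J_c, grad_r, grad_c

for step in range(200):
    J_r, J_c, grad_r, grad_c = estimate_returns(theta)
    # Primal ascent on the Lagrangian L = J_r - lam * (J_c - d)
    theta += eta_theta * (grad_r - lam * grad_c)
    # Dual ascent: raise lam while the cost constraint is violated
    lam = max(0.0, lam + eta_lam * (J_c - d))

print(f"theta={theta}, J_c={estimate_returns(theta)[1]:.3f} (budget {d})")
```

The dual variable rises as long as the cost exceeds the budget d, pushing the policy parameters back into the feasible region; at convergence it acts as the constraint's shadow price.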
Similar Papers
Scalable Safe Multi-Agent Reinforcement Learning for Multi-Agent Systems
Multiagent Systems
Helps many robots work together safely and efficiently.
Probabilistic Shielding for Safe Reinforcement Learning
Machine Learning (Stat)
Keeps robots safe while they learn new tasks.
Provably Optimal Reinforcement Learning under Safety Filtering
Machine Learning (CS)
Lets robots learn optimally while a safety filter blocks unsafe actions.