Provably Efficient Sample Complexity for Robust CMDP
By: Sourav Ganguly, Arnob Ghosh
Potential Business Impact:
Teaches robots to be safe and smart.
We study the problem of learning policies that maximize cumulative reward while satisfying safety constraints, even when the real environment differs from a simulator or nominal model. We focus on robust constrained Markov decision processes (RCMDPs), where the agent must maximize reward while ensuring that cumulative utility exceeds a threshold under the worst-case dynamics within an uncertainty set. While recent works have established finite-time iteration complexity guarantees for RCMDPs using policy optimization, their sample complexity guarantees remain largely unexplored. In this paper, we first show that Markovian policies may fail to be optimal even under rectangular uncertainty sets, unlike in the {\em unconstrained} robust MDP setting. To address this, we introduce an augmented state space that incorporates the remaining utility budget into the state representation. Building on this formulation, we propose a novel Robust Constrained Value Iteration (RCVI) algorithm with a sample complexity of $\tilde{\mathcal{O}}(|S||A|H^5/\epsilon^2)$ that achieves at most $\epsilon$ constraint violation using a generative model, where $|S|$ and $|A|$ denote the sizes of the state and action spaces, respectively, and $H$ is the episode length. To the best of our knowledge, this is the {\em first sample complexity guarantee} for RCMDP. Empirical results further validate the effectiveness of our approach.
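To make the core idea concrete, below is a minimal toy sketch (not the paper's RCVI algorithm) of a value-iteration-style backup on a budget-augmented state $(s, b)$, where $b$ tracks the remaining utility budget, and the backup takes the worst case over a small finite set of transition kernels as a stand-in for a rectangular uncertainty set. All names, the budget discretization, and the uncertainty set construction are illustrative assumptions, not from the paper.

```python
import numpy as np

# Illustrative sketch only: budget-augmented robust value iteration on a toy
# tabular problem. The true RCVI algorithm and its uncertainty set are defined
# in the paper; here the uncertainty set is just a few perturbed kernels.

H = 5                      # episode length
S, A = 3, 2                # small state and action spaces
budgets = np.linspace(0.0, H, num=6)   # discretized remaining-budget levels (assumption)

rng = np.random.default_rng(0)
reward = rng.random((S, A))            # r(s, a)
utility = rng.random((S, A))           # u(s, a), accumulated toward the constraint

def normalize(P):
    return P / P.sum(axis=-1, keepdims=True)

# Finite uncertainty set: a few candidate kernels around a nominal model.
nominal = normalize(rng.random((S, A, S)))
uncertainty_set = [normalize(nominal + 0.1 * rng.random((S, A, S))) for _ in range(3)]

def nearest_budget(b):
    return int(np.argmin(np.abs(budgets - b)))

# V[h, s, k]: value at step h in augmented state (s, budgets[k]); ending the
# episode with positive remaining budget is penalized as infeasible.
V = np.zeros((H + 1, S, len(budgets)))
V[H][:, budgets > 0] = -1e6

for h in reversed(range(H)):
    for s in range(S):
        for k, b in enumerate(budgets):
            q = np.full(A, -np.inf)
            for a in range(A):
                b_next = max(b - utility[s, a], 0.0)   # budget shrinks as utility is collected
                k_next = nearest_budget(b_next)
                # Robust backup: worst case over the uncertainty set.
                worst = min(P[s, a] @ V[h + 1][:, k_next] for P in uncertainty_set)
                q[a] = reward[s, a] + worst
            V[h, s, k] = q.max()

print("Robust value from state 0 with an initial budget of 1.0:",
      V[0, 0, nearest_budget(1.0)])
```

The point of the augmentation is visible in the backup: the greedy action depends on how much utility is still owed, which is exactly the history information a plain Markovian policy on $s$ alone would discard.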
Similar Papers
Efficient Policy Optimization in Robust Constrained MDPs with Iteration Complexity Guarantees
Machine Learning (CS)
Teaches robots to make safe choices always.
Provably Efficient RL under Episode-Wise Safety in Constrained MDPs with Linear Function Approximation
Machine Learning (CS)
Teaches robots to learn safely and fast.
Near-Optimal Sample Complexity Bounds for Constrained Average-Reward MDPs
Machine Learning (CS)
Teaches computers to make smart choices with rules.