Efficient Solution and Learning of Robust Factored MDPs
By: Yannik Schnitzer, Alessandro Abate, David Parker
Potential Business Impact:
Helps AI learn safe actions with fewer tries
Robust Markov decision processes (r-MDPs) extend MDPs by explicitly modelling epistemic uncertainty about transition dynamics. Learning r-MDPs from interactions with an unknown environment enables the synthesis of robust policies with provable (PAC) guarantees on performance, but this can require a large number of sample interactions. We propose novel methods for solving and learning r-MDPs based on factored state-space representations that leverage the independence between model uncertainty across system components. Although policy synthesis for factored r-MDPs leads to hard, non-convex optimisation problems, we show how to reformulate these into tractable linear programs. Building on these, we also propose methods to learn factored model representations directly. Our experimental results show that exploiting factored structure can yield dimensional gains in sample efficiency, producing more effective robust policies with tighter performance guarantees than state-of-the-art methods.
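To make the robust-planning idea concrete, here is a minimal sketch of robust value iteration over an (s,a)-rectangular interval ambiguity set. This is not the paper's factored-LP method; it is a standard baseline where, for each state-action pair, an adversary picks the worst transition distribution within interval bounds around a nominal model. All names (`worst_case_dist`, `robust_value_iteration`), the interval radius `eps`, and the toy two-state MDP are illustrative assumptions.

```python
import numpy as np

def worst_case_dist(p_nom, eps, values):
    """Worst-case distribution in the interval set [p_nom - eps, p_nom + eps].

    The adversary greedily shifts probability mass onto the lowest-value
    successor states, subject to the interval bounds summing to one.
    """
    lo = np.clip(p_nom - eps, 0.0, 1.0)
    hi = np.clip(p_nom + eps, 0.0, 1.0)
    q = lo.copy()
    slack = 1.0 - q.sum()
    for s in np.argsort(values):        # lowest-value states first
        add = min(hi[s] - lo[s], slack)
        q[s] += add
        slack -= add
        if slack <= 0:
            break
    return q

def robust_value_iteration(P, R, eps, gamma=0.9, iters=200):
    """Value iteration with a worst-case inner step per (state, action)."""
    nS, nA = R.shape
    V = np.zeros(nS)
    for _ in range(iters):
        Q = np.empty((nS, nA))
        for s in range(nS):
            for a in range(nA):
                q = worst_case_dist(P[s, a], eps, V)
                Q[s, a] = R[s, a] + gamma * (q @ V)
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

# Toy MDP: state 0 has a safe (a=0) and a risky (a=1) action; state 1 absorbs.
P = np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.0, 1.0], [0.0, 1.0]]])
R = np.array([[1.0, 2.0],
              [0.0, 0.0]])

V_nom, pi_nom = robust_value_iteration(P, R, eps=0.0)
V_rob, pi_rob = robust_value_iteration(P, R, eps=0.1)
```

Widening the uncertainty radius can only lower the robust value (the adversary gains power), so `V_rob[0] <= V_nom[0]`; the paper's contribution is avoiding this kind of monolithic per-state computation by exploiting independence between uncertainty in different factored components.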
Similar Papers
Model-Based Reinforcement Learning Under Confounding
Machine Learning (CS)
Lets computers learn from past mistakes without seeing everything.
Provably Efficient Sample Complexity for Robust CMDP
Machine Learning (CS)
Teaches robots to be safe and smart.
Policy Regularized Distributionally Robust Markov Decision Processes with Linear Function Approximation
Machine Learning (CS)
Teaches robots to learn safely in new places.