Planning and Learning in Average Risk-aware MDPs
By: Weikai Wang, Erick Delage
Potential Business Impact:
Helps smart programs make safer, smarter choices.
For continuing tasks, average-cost Markov decision processes (MDPs) have well-documented value and can be solved with efficient algorithms. However, this framework explicitly assumes that the agent is risk-neutral. In this work, we extend risk-neutral algorithms to accommodate the more general class of dynamic risk measures. Specifically, we propose a relative value iteration (RVI) algorithm for planning and design two model-free Q-learning algorithms: a generic algorithm based on the multi-level Monte Carlo (MLMC) method, and an off-policy algorithm dedicated to utility-based shortfall risk measures. Both the RVI and MLMC-based Q-learning algorithms are proven to converge to optimality. Numerical experiments validate our analysis, empirically confirm the convergence of the off-policy algorithm, and demonstrate that our approach identifies policies finely tuned to the intricate risk awareness of the agent they serve.
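As a rough illustration of the planning side, the sketch below runs relative value iteration on a tabular average-cost MDP with the one-step expectation over next states replaced by a risk map (here a CVaR over costs at level alpha). The helper `cvar_of_costs`, the parameter names, and the exact update rule are illustrative assumptions for this page, not the authors' algorithm or notation.

```python
import numpy as np

def cvar_of_costs(values, probs, alpha):
    """Average of the worst alpha-fraction of a discrete cost distribution.
    One illustrative one-step risk measure; a placeholder for whichever
    convex/dynamic risk measure the agent actually uses (assumption)."""
    order = np.argsort(values)[::-1]            # worst (largest) costs first
    v, p = values[order], probs[order]
    mass_left, acc = alpha, 0.0
    for vi, pi in zip(v, p):
        take = min(pi, mass_left)               # absorb probability mass from the tail
        acc += take * vi
        mass_left -= take
        if mass_left <= 1e-12:
            break
    return acc / alpha

def risk_aware_rvi(P, c, alpha=0.1, ref_state=0, tol=1e-8, max_iter=10_000):
    """Relative value iteration where the expectation over next states is
    replaced by a one-step risk map (hypothetical sketch, not the paper's code).

    P : (n_actions, n_states, n_states) array, P[a, s, s'] = transition probability.
    c : (n_states, n_actions) array of immediate costs.
    Returns an average-cost estimate, the relative value function, and a greedy policy.
    """
    n_actions, n_states, _ = P.shape
    h = np.zeros(n_states)
    for _ in range(max_iter):
        q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                # risk map of next-state relative values instead of a plain expectation
                q[s, a] = c[s, a] + cvar_of_costs(h, P[a, s], alpha)
        Th = q.min(axis=1)
        gain = Th[ref_state]                    # average-cost estimate read at the reference state
        h_new = Th - gain                       # relative normalization keeps iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    policy = q.argmin(axis=1)
    return gain, h, policy
```

With alpha = 1 the risk map collapses to the ordinary expectation and the sketch reduces to standard risk-neutral RVI; smaller alpha makes the planner increasingly sensitive to high-cost transitions, which is the kind of risk-tuned behavior the abstract refers to.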
Similar Papers
Online Bayesian Risk-Averse Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn safely from less data.
Risk-sensitive Reinforcement Learning Based on Convex Scoring Functions
Mathematical Finance
Teaches computers to trade money safely and smartly.
Provably Sample-Efficient Robust Reinforcement Learning with Average Reward
Machine Learning (CS)
Helps computers learn better with less data.