Online Bayesian Risk-Averse Reinforcement Learning
By: Yuhao Wang, Enlu Zhou
Potential Business Impact:
Teaches computers to learn safely from less data.
In this paper, we study the Bayesian risk-averse formulation of reinforcement learning (RL). To address the epistemic uncertainty arising from a lack of data, we adopt the Bayesian Risk Markov Decision Process (BRMDP) to account for parameter uncertainty in the unknown underlying model. We derive an asymptotic normality result that characterizes the difference between the Bayesian risk value function and the original value function under the true, unknown distribution. The result indicates that the Bayesian risk-averse approach tends to pessimistically underestimate the original value function; this discrepancy increases with stronger risk aversion and decreases as more data become available. We then exploit this adaptive property in online RL as well as in online contextual multi-armed bandits (CMAB), a special case of online RL. We propose two posterior-sampling procedures, one for the general RL problem and one for the CMAB problem. We establish sub-linear regret bounds under the conventional notion of regret for both the RL and CMAB settings, and an additional sub-linear regret bound for the CMAB setting under the Bayesian risk regret. Finally, we conduct numerical experiments to demonstrate the effectiveness of the proposed algorithms in addressing epistemic uncertainty and to verify the theoretical properties.
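To make the adaptive property concrete, here is a minimal illustrative sketch, not the paper's exact algorithm: a risk-averse variant of posterior (Thompson) sampling for a Bernoulli multi-armed bandit, where each arm is scored by the lower-tail CVaR of posterior samples of its mean reward. The Beta-Bernoulli model, the CVaR risk measure, and the level `alpha` are assumptions made here for concreteness; with little data the posterior is wide and the score is pessimistic, and the penalty shrinks as data accumulate.

```python
# Illustrative sketch (assumed setup, not the paper's procedure): risk-averse
# posterior sampling on a Bernoulli bandit with a Beta prior and a CVaR score.
import numpy as np

rng = np.random.default_rng(0)

def cvar(samples, alpha=0.2):
    """Lower-tail CVaR: mean of the worst alpha-fraction of posterior samples."""
    k = max(1, int(np.ceil(alpha * len(samples))))
    return np.sort(samples)[:k].mean()

def risk_averse_thompson(true_means, horizon=2000, n_post=200, alpha=0.2):
    n_arms = len(true_means)
    successes = np.ones(n_arms)   # Beta(1, 1) prior for each arm
    failures = np.ones(n_arms)
    rewards = []
    for _ in range(horizon):
        # Score each arm by the CVaR of posterior samples of its mean reward:
        # wide posteriors (little data) are penalized more, mirroring the
        # pessimistic underestimation described in the abstract.
        scores = [cvar(rng.beta(successes[a], failures[a], n_post), alpha)
                  for a in range(n_arms)]
        arm = int(np.argmax(scores))
        r = rng.random() < true_means[arm]   # Bernoulli reward
        successes[arm] += r
        failures[arm] += 1 - r
        rewards.append(r)
    return np.mean(rewards)

print(risk_averse_thompson(true_means=[0.3, 0.5, 0.7]))
```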
Similar Papers
Asymptotically optimal reinforcement learning in Block Markov Decision Processes
Machine Learning (CS)
Teaches robots to learn faster in complex worlds.
Non-Stationary Restless Multi-Armed Bandits with Provable Guarantee
Machine Learning (CS)
Helps computers learn when things change.
Online Robust Multi-Agent Reinforcement Learning under Model Uncertainties
Machine Learning (CS)
Teaches robots to learn from mistakes safely.