Bayesian learning of the optimal action-value function in a Markov decision process
By: Jiaqi Guo, Chon Wai Ho, Sumeetpal S. Singh
Potential Business Impact:
Helps robots learn the best actions with less guessing.
The Markov Decision Process (MDP) is a popular framework for sequential decision-making problems, and uncertainty quantification is an essential component of learning optimal decision-making strategies within it. In particular, a Bayesian framework is used to maintain beliefs about the optimal decisions and about the unknown ingredients of the model, such as the rewards and state dynamics, which must also be learned from data. However, many existing Bayesian approaches for learning the optimal decision-making strategy rest on unrealistic modelling assumptions and rely on approximate inference techniques. This raises doubts about whether the benefits of Bayesian uncertainty quantification are fully realised or can be relied upon. We focus on infinite-horizon, undiscounted MDPs with finite state and action spaces and a terminal state. We provide a full Bayesian framework, from modelling to inference to decision-making. For modelling, we introduce a likelihood function with minimal assumptions for learning the optimal action-value function based on Bellman's optimality equations, analyse its properties, and clarify connections to existing works. For deterministic rewards, this likelihood is degenerate, and we introduce artificial observation noise to relax it, in a controlled manner, to facilitate more efficient Monte Carlo-based inference. For inference, we propose an adaptive sequential Monte Carlo algorithm to both sample from and adjust the sequence of relaxed posterior distributions. For decision-making, we choose actions using samples from the posterior distribution over the optimal strategies. While this is commonly done, we provide new insight that clearly shows it to be a generalisation of Thompson sampling from multi-armed bandit problems. Finally, we evaluate our framework on the Deep Sea benchmark problem and demonstrate the exploration benefits of posterior sampling in MDPs.
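As a rough sketch of the modelling step described above (our reading of the abstract, not notation taken from the paper; the noise variance \lambda and the dataset symbol \mathcal{D} are assumptions), the Bellman optimality equations and a Gaussian relaxation of the degenerate likelihood could take the following form:

```latex
% Bellman optimality for an undiscounted MDP with a terminal state:
Q^*(s,a) = r(s,a) + \sum_{s'} P(s' \mid s, a)\, \max_{a'} Q^*(s', a'),
\qquad Q^*(s_{\mathrm{term}}, a) = 0.

% A likelihood concentrated on solutions of this fixed-point equation is
% degenerate for deterministic rewards; adding artificial observation
% noise with variance \lambda gives a relaxed likelihood such as
p_\lambda(\mathcal{D} \mid Q) \propto
\exp\!\Big( -\tfrac{1}{2\lambda} \sum_{(s,a) \in \mathcal{D}}
\big( Q(s,a) - r(s,a) - \textstyle\sum_{s'} P(s' \mid s, a)\,
\max_{a'} Q(s', a') \big)^{2} \Big),
```

with \lambda \to 0 recovering the degenerate case, which is one way to read "relaxing the likelihood in a controlled manner".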
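The decision-making step, posterior sampling as a generalisation of Thompson sampling, can be illustrated with a minimal particle sketch. Everything below is hypothetical scaffolding: the tabular particle representation, the Gaussian reweighting, and all names (bellman_residual, reweight, select_action, noise_var) are our own illustration, not the authors' adaptive SMC algorithm, which additionally adapts the sequence of relaxed posteriors.

```python
# Hypothetical sketch: posterior-sampling control with a particle
# approximation of the posterior over Q*. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_particles = 5, 2, 100

# Particles: each is a candidate optimal action-value table Q[s, a],
# with importance weights (here initialised from a diffuse prior).
particles = rng.normal(0.0, 1.0, size=(n_particles, n_states, n_actions))
weights = np.full(n_particles, 1.0 / n_particles)

def bellman_residual(Q, s, a, r, s_next, terminal):
    """Residual of Bellman's optimality equation for one transition."""
    target = r + (0.0 if terminal else Q[s_next].max())
    return Q[s, a] - target

def reweight(particles, weights, transition, noise_var):
    """One SMC-style reweighting step under the relaxed Gaussian
    likelihood; noise_var plays the role of the artificial noise."""
    s, a, r, s_next, terminal = transition
    res = np.array([bellman_residual(Q, s, a, r, s_next, terminal)
                    for Q in particles])
    w = weights * np.exp(-0.5 * res**2 / noise_var)
    return w / w.sum()

def select_action(particles, weights, s):
    """Generalised Thompson sampling: draw one Q-table from the
    posterior approximation and act greedily with respect to it."""
    idx = rng.choice(len(particles), p=weights)
    return int(particles[idx][s].argmax())
```

In use, one would call select_action at each step, observe the resulting transition, and call reweight; a full sequential Monte Carlo implementation would add resampling and move steps when the weights degenerate, and would adapt noise_var across the sequence of relaxed posteriors.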
Similar Papers
Optimal Estimation and Uncertainty Quantification for Stochastic Inverse Problems via Variational Bayesian Methods
Numerical Analysis
Finds hidden answers in messy data.
A Minimax-MDP Framework with Future-imposed Conditions for Learning-augmented Problems
Machine Learning (CS)
Helps make better choices as guesses improve.
Position: There Is No Free Bayesian Uncertainty Quantification
Machine Learning (Stat)
Shows how computers guess better and know when they're wrong.