Distributionally Robust Deep Q-Learning
By: Chung I Lu, Julian Sester, Aijia Zhang
Potential Business Impact:
Teaches computers to make smart money choices.
We propose a novel distributionally robust $Q$-learning algorithm for the non-tabular case with continuous state spaces, where the state transition of the underlying Markov decision process is subject to model uncertainty. The uncertainty is taken into account by considering the worst-case transition from a ball around a reference probability measure. To determine the optimal policy under the worst-case state transition, we solve the associated non-linear Bellman equation by dualising and regularising the Bellman operator with the Sinkhorn distance; the resulting operator is then parametrised with deep neural networks. This approach allows us to modify the Deep Q-Network algorithm to optimise for the worst-case state transition. We illustrate the tractability and effectiveness of our approach through several applications, including a portfolio optimisation task based on S&P 500 data.
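To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a DQN-style target computation in which the usual next-state value is replaced by a soft worst case over perturbed next-state samples, imitating the entropic (Sinkhorn-type) regularisation of the inner worst-case problem described in the abstract. All function names, hyperparameters, and the Gaussian perturbation model are illustrative assumptions.

import torch

def robust_target(q_target_net, rewards, next_states, gamma=0.99,
                  n_perturb=16, noise_std=0.05, eps=0.1):
    """Distributionally robust TD target (illustrative sketch).

    rewards:      (B,) tensor of one-step rewards
    next_states:  (B, d) tensor of next states sampled from the reference transition
    n_perturb:    number of perturbed next-state candidates per sample
    noise_std:    scale of the perturbation around the reference (ball "radius" proxy)
    eps:          entropic regularisation strength (soft-min temperature)
    """
    B, d = next_states.shape
    # Candidate next states drawn around the reference sample -- a crude stand-in
    # for transitions inside the Sinkhorn ball around the reference measure.
    noise = noise_std * torch.randn(B, n_perturb, d, device=next_states.device)
    candidates = next_states.unsqueeze(1) + noise                  # (B, n_perturb, d)

    with torch.no_grad():
        q_vals = q_target_net(candidates.reshape(B * n_perturb, d))
        v = q_vals.max(dim=-1).values.reshape(B, n_perturb)        # greedy value per candidate

    # Soft (entropy-regularised) worst case over candidates:
    # -eps * log( (1/n) * sum_i exp(-v_i / eps) ).
    # As eps -> 0 this approaches the hard minimum; larger eps stays closer to the mean.
    worst_case_v = -eps * torch.logsumexp(-v / eps, dim=1) + eps * torch.log(
        torch.tensor(float(n_perturb), device=v.device)
    )
    return rewards + gamma * worst_case_v

In a DQN training loop this target would simply replace the standard target rewards + gamma * max_a Q_target(s', a); the rest of the algorithm (replay buffer, target-network updates) is unchanged.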
Similar Papers
Semiparametric Double Reinforcement Learning with Applications to Long-Term Causal Inference
Machine Learning (Stat)
Helps computers learn better from past experiences.
Provably Near-Optimal Distributionally Robust Reinforcement Learning in Online Settings
Machine Learning (CS)
Teaches robots to work safely in new places.
Deep Transfer $Q$-Learning for Offline Non-Stationary Reinforcement Learning
Machine Learning (Stat)
Teaches computers to make better choices with less data.