Tractable Representations for Convergent Approximation of Distributional HJB Equations
By: Julie Alhosh, Harley Wiltzer, David Meger
Potential Business Impact:
Helps computers learn better in real time.
In reinforcement learning (RL), the long-term behavior of decision-making policies is typically evaluated through their expected returns. Distributional RL has emerged as a family of techniques for learning full return distributions, which provide additional statistics for evaluating policies and support risk-sensitive considerations. When the passage of time cannot naturally be divided into discrete increments, researchers study the continuous-time RL (CTRL) problem, in which agent states and decisions evolve continuously. In this setting, the Hamilton-Jacobi-Bellman (HJB) equation is the well-established characterization of the expected return, and many solution methods exist. The study of distributional RL in continuous time, however, is in its infancy. Recent work established a distributional HJB (DHJB) equation, providing the first characterization of return distributions in CTRL. The DHJB equation cannot be solved exactly, and its solutions cannot be represented exactly, so novel approximation techniques are required. This work takes strides toward that end, establishing conditions on the parameterization of return distributions under which the DHJB equation can be approximately solved. In particular, we show that when the mapping between the statistics learned by a distributional RL algorithm and the corresponding return distributions satisfies a certain topological property, approximating those statistics yields close approximations of the solution of the DHJB equation. Concretely, we demonstrate that the quantile representation common in distributional RL satisfies this topological property, certifying an efficient approximation algorithm for continuous-time distributional RL.
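To give a feel for the kind of continuity property the abstract alludes to, the sketch below is our own illustration, not code from the paper; the names quantile_representation and wasserstein_1 are hypothetical. It maps a finite set of quantile statistics to an equal-weight empirical return distribution and checks that perturbing the statistics by at most eps moves the represented distribution by at most eps in 1-Wasserstein distance.

```python
import numpy as np

def quantile_representation(quantiles):
    """Map a finite vector of quantile statistics to an empirical
    (equal-weight) distribution supported on those quantile locations."""
    atoms = np.sort(np.asarray(quantiles, dtype=float))
    probs = np.full(atoms.size, 1.0 / atoms.size)
    return atoms, probs

def wasserstein_1(q_a, q_b):
    """1-Wasserstein distance between two equal-weight quantile
    representations with the same number of atoms: the mean absolute
    gap between the sorted quantile locations."""
    return np.mean(np.abs(np.sort(q_a) - np.sort(q_b)))

# A small perturbation of the learned quantile statistics yields a
# distribution that is close in 1-Wasserstein distance, illustrating
# the continuity of the statistics-to-distribution mapping.
rng = np.random.default_rng(0)
true_q = rng.normal(size=32)          # hypothetical "true" quantile statistics
approx_q = true_q + rng.uniform(-0.01, 0.01, size=32)  # eps = 0.01 perturbation
print(wasserstein_1(true_q, approx_q))  # bounded above by 0.01
```

Under these assumptions, approximating the quantile statistics to accuracy eps controls the error of the represented return distribution, which is the flavor of guarantee the paper establishes for solving the DHJB equation.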
Similar Papers
Is Bellman Equation Enough for Learning Control?
Machine Learning (CS)
Makes smart robots learn the right way.
Bridging Continuous-time LQR and Reinforcement Learning via Gradient Flow of the Bellman Error
Systems and Control
Makes robots learn faster and better.
Continuous-Time Value Iteration for Multi-Agent Reinforcement Learning
Machine Learning (CS)
Lets many robots learn to work together.