A Differential Perspective on Distributional Reinforcement Learning

Published: June 3, 2025 | arXiv ID: 2506.03333v1

By: Juan Sebastian Rojas, Chi-Guhn Lee

Potential Business Impact:

Helps AI systems optimize the reward they earn per step over the long run, while also capturing the full range of likely outcomes rather than just the average.

Business Areas:
A/B Testing, Data and Analytics

To date, distributional reinforcement learning (distributional RL) methods have exclusively focused on the discounted setting, where an agent aims to optimize a potentially-discounted sum of rewards over time. In this work, we extend distributional RL to the average-reward setting, where an agent aims to optimize the reward received per time-step. In particular, we utilize a quantile-based approach to develop the first set of algorithms that can successfully learn and/or optimize the long-run per-step reward distribution, as well as the differential return distribution of an average-reward MDP. We derive proven-convergent tabular algorithms for both prediction and control, as well as a broader family of algorithms that have appealing scaling properties. Empirically, we find that these algorithms consistently yield competitive performance when compared to their non-distributional equivalents, while also capturing rich information about the long-run reward and return distributions.
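To make the quantile-based, average-reward idea concrete, here is a minimal sketch of a tabular prediction step: quantile estimates of the differential return distribution are updated with a quantile-regression step whose TD target subtracts a running average-reward estimate. The function name, step sizes, and the way the average-reward estimate is updated are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def quantile_differential_td_step(z, rho, s, r, s_next, alpha=0.05, beta=0.01):
    """One illustrative prediction step (assumed form, not the paper's exact update).

    z   : array of shape (n_states, n_quantiles), quantile estimates of the
          differential return distribution for each state
    rho : scalar estimate of the long-run average reward per step
    """
    n_quantiles = z.shape[1]
    # quantile midpoints tau_i = (2i + 1) / (2N)
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles

    # sample one quantile of the next state's distribution and form the
    # differential TD target: reward minus average reward plus next-state value
    j = np.random.randint(n_quantiles)
    target = r - rho + z[s_next, j]

    # quantile-regression step for every quantile of the current state
    for i in range(n_quantiles):
        indicator = float(target < z[s, i])
        z[s, i] += alpha * (taus[i] - indicator)

    # update the average-reward estimate from the mean differential TD error
    # (one common choice in average-reward TD methods; assumed here)
    td_error = r - rho + z[s_next].mean() - z[s].mean()
    rho += beta * td_error
    return z, rho
```

In this sketch, the mean of each state's quantiles plays the role of the differential value function, and rho tracks the per-step reward that the paper's control algorithms aim to optimize.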

Country of Origin
🇨🇦 Canada

Page Count
39 pages

Category
Computer Science:
Machine Learning (CS)