Optimistic Reinforcement Learning with Quantile Objectives

Published: November 12, 2025 | arXiv ID: 2511.09652v1

By: Mohammad Alipour-Vaezi, Huaiyang Zhong, Kwok-Leung Tsui, and more

Potential Business Impact:

Teaches computers to make safer, risk-aware choices in fields like healthcare and finance.

Business Areas:
Science and Engineering

Reinforcement Learning (RL) has achieved tremendous success in recent years. However, the classical foundations of RL do not account for the risk sensitivity of the objective function, which is critical in various fields, including healthcare and finance. A popular approach to incorporating risk sensitivity is to optimize a specific quantile of the cumulative reward distribution. In this paper, we develop UCB-QRL, an optimistic learning algorithm for the $\tau$-quantile objective in finite-horizon Markov decision processes (MDPs). UCB-QRL is an iterative algorithm in which, at each iteration, we first estimate the underlying transition probability and then optimize the quantile value function over a confidence ball around this estimate. We show that UCB-QRL yields a high-probability regret bound $\mathcal{O}\left((2/\kappa)^{H+1}H\sqrt{SATH\log(2SATH/\delta)}\right)$ in the episodic setting with $S$ states, $A$ actions, $T$ episodes, and horizon $H$. Here, $\kappa>0$ is a problem-dependent constant that captures the sensitivity of the underlying MDP's quantile value.
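To make the estimate-then-optimize structure concrete, here is a minimal Python sketch of an optimistic episodic loop in that spirit. Everything here is an assumption for illustration: the tabular `env` interface (`reset()`, `step(s, a)`, `reward(s, a)`) is hypothetical, the L1 confidence radius is schematic, and the backup uses an expected-value placeholder where UCB-QRL actually backs up the $\tau$-quantile of the return distribution. It is not the paper's algorithm, only the general "optimize over a confidence ball around the empirical transitions" pattern.

```python
import numpy as np

def optimistic_transition(p_hat, beta, v_next):
    """UCRL2-style inner maximization: choose the distribution p inside the
    L1 ball {p : ||p - p_hat||_1 <= beta} that maximizes p @ v_next."""
    p = p_hat.copy()
    best = int(np.argmax(v_next))
    p[best] = min(1.0, p[best] + beta / 2.0)   # shift mass toward the best state
    for s in np.argsort(v_next):               # drain excess from worst states
        excess = p.sum() - 1.0
        if excess <= 0.0:
            break
        p[s] = max(0.0, p[s] - excess)
    return p

def optimistic_episodic_loop(env, S, A, H, T, delta=0.05):
    """Sketch of an optimistic loop: estimate transitions, plan optimistically
    over a confidence ball, act greedily, update counts."""
    counts = np.ones((S, A, S))                 # Laplace-smoothed visit counts
    for _ in range(T):
        n = counts.sum(axis=2)                  # visits to each (s, a)
        p_hat = counts / n[:, :, None]          # empirical transition estimate
        # schematic L1 radius mirroring the log(2SATH/delta) term in the bound
        beta = np.sqrt(2.0 * np.log(2 * S * A * T * H / delta) / n)
        q = np.zeros((H + 1, S, A))
        v = np.zeros((H + 1, S))
        for h in range(H - 1, -1, -1):          # backward dynamic programming
            for s in range(S):
                for a in range(A):
                    p_opt = optimistic_transition(p_hat[s, a], beta[s, a], v[h + 1])
                    # Placeholder expected-value backup; UCB-QRL instead
                    # optimizes the tau-quantile value function here.
                    q[h, s, a] = env.reward(s, a) + p_opt @ v[h + 1]
                v[h, s] = q[h, s].max()
        s = env.reset()                          # roll out the greedy policy
        for h in range(H):
            a = int(np.argmax(q[h, s]))
            s_next = env.step(s, a)
            counts[s, a, s_next] += 1            # update the empirical model
            s = s_next
    return q
```

The inner maximization is the standard closed form for optimism over an L1 ball: push $\beta/2$ probability mass onto the highest-value next state and remove the same mass from the lowest-value states, which is what makes the planned value an upper confidence bound on the true one.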

Country of Origin
🇺🇸 United States

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)