Order Optimal Regret Bounds for Sharpe Ratio Optimization in the Bandit Setting
By: Mohammad Taha Shah, Sabrina Khurshid, Gourab Ghatak
Potential Business Impact:
Helps computers make smarter choices with less risk.
In this paper, we investigate the problem of sequential decision-making for Sharpe ratio (SR) maximization in a stochastic bandit setting. We focus on the Thompson Sampling (TS) algorithm, a Bayesian approach celebrated for its empirical performance and exploration efficiency, under the assumption of Gaussian rewards with unknown parameters. Unlike conventional bandit objectives that maximize cumulative reward, Sharpe ratio optimization introduces an inherent tradeoff between achieving high returns and controlling risk, demanding careful exploration of both the mean and the variance. Our theoretical contributions include a novel regret decomposition tailored to the Sharpe ratio, highlighting the role of information acquisition about the reward distribution in driving learning efficiency. We then establish an upper bound on the regret of the proposed algorithm SRTS, derive a matching lower bound, and thereby show that the algorithm is order-optimal. Our results show that Thompson Sampling achieves logarithmic regret over time, with distribution-dependent factors capturing the difficulty of distinguishing arms based on risk-adjusted performance. Empirical simulations show that our algorithm significantly outperforms existing algorithms.
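The abstract does not spell out the sampling rule of SRTS, but the general idea of Thompson Sampling for Sharpe ratio maximization under Gaussian rewards with unknown mean and variance can be illustrated with a minimal sketch. The Normal-Inverse-Gamma posterior, the helper name srts_step, and all prior and horizon choices below are illustrative assumptions, not the paper's SRTS specification.

```python
# Illustrative sketch (not the paper's SRTS): Thompson-Sampling-style arm
# selection for Sharpe ratio maximization with Gaussian rewards whose mean
# and variance are both unknown. Each arm keeps a Normal-Inverse-Gamma
# posterior over (mu, sigma^2); one draw per arm is converted into a sampled
# Sharpe ratio mu / sigma, and the arm with the largest sample is played.
import numpy as np

rng = np.random.default_rng(0)

class GaussianArmPosterior:
    """Normal-Inverse-Gamma posterior over (mu, sigma^2) of one arm (assumed prior)."""

    def __init__(self, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=1.0):
        self.mu, self.kappa, self.alpha, self.beta = mu0, kappa0, alpha0, beta0

    def update(self, x):
        """Conjugate posterior update after observing a single reward x."""
        mu, kappa, alpha, beta = self.mu, self.kappa, self.alpha, self.beta
        self.mu = (kappa * mu + x) / (kappa + 1.0)
        self.kappa = kappa + 1.0
        self.alpha = alpha + 0.5
        self.beta = beta + 0.5 * kappa * (x - mu) ** 2 / (kappa + 1.0)

    def sample_sharpe(self):
        """Draw (mu, sigma^2) from the posterior and return the sampled Sharpe ratio mu / sigma."""
        sigma2 = 1.0 / rng.gamma(self.alpha, 1.0 / self.beta)   # Inverse-Gamma draw
        mu = rng.normal(self.mu, np.sqrt(sigma2 / self.kappa))
        return mu / np.sqrt(sigma2)


def srts_step(posteriors):
    """Hypothetical helper: play the arm whose sampled Sharpe ratio is largest."""
    return int(np.argmax([p.sample_sharpe() for p in posteriors]))


# Toy run: two arms with equal means but different risk; the low-variance
# arm has the higher true Sharpe ratio and should be pulled more often.
true_params = [(1.0, 2.0), (1.0, 0.5)]          # (mean, std) per arm
posteriors = [GaussianArmPosterior() for _ in true_params]
pulls = [0, 0]
for _ in range(2000):
    a = srts_step(posteriors)
    reward = rng.normal(*true_params[a])
    posteriors[a].update(reward)
    pulls[a] += 1
print("pull counts:", pulls)
```

In this toy setup the second arm dominates on risk-adjusted performance despite identical means, which mirrors the mean-variance tradeoff the abstract highlights.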
Similar Papers
Thompson Sampling-like Algorithms for Stochastic Rising Bandits
Machine Learning (Stat)
Helps computers learn which choices get better.
No-Regret Thompson Sampling for Finite-Horizon Markov Decision Processes with Gaussian Processes
Machine Learning (CS)
Helps smart robots learn faster in new situations.
Thompson Sampling for Multi-Objective Linear Contextual Bandit
Machine Learning (Stat)
Helps computers make better choices with many goals.