Thompson Sampling-like Algorithms for Stochastic Rising Bandits
By: Marco Fiandri, Alberto Maria Metelli, Francesco Trovò
Potential Business Impact:
Helps computers learn which choices get better.
The stochastic rising rested bandit (SRRB) is a setting where the arms' expected rewards increase as they are pulled. It models scenarios in which the performance of the different options grows as an effect of an underlying learning process (e.g., online model selection). Although the bandit literature provides specifically crafted algorithms based on upper-confidence bounds for such a setting, no study of Thompson sampling (TS)-like algorithms has been performed so far. The strong regularity of the expected rewards in the SRRB setting suggests that specific instances may be tackled effectively using adapted and sliding-window TS approaches. This work provides novel regret analyses for such algorithms in SRRBs, highlighting the challenges and providing new technical tools of independent interest. Our results allow us to identify under which assumptions TS-like algorithms succeed in achieving sublinear regret and which properties of the environment govern the complexity of the regret minimization problem when approached with TS. Furthermore, we provide a regret lower bound based on a complexity index we introduce. Finally, we conduct numerical simulations comparing TS-like algorithms with state-of-the-art approaches for SRRBs in synthetic and real-world settings.
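To make the sliding-window TS idea mentioned in the abstract concrete, here is a minimal sketch, not the paper's algorithm: it runs a Gaussian Thompson sampler that builds each arm's posterior from only its most recent observations, on a toy rested rising environment where an arm's expected reward grows with its own pull count. The reward curves, noise level, and window size are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): sliding-window Gaussian
# Thompson sampling on a toy stochastic rising rested bandit, where each
# arm's expected reward grows with the number of times that arm is pulled.
import numpy as np

rng = np.random.default_rng(0)

K = 3        # number of arms (assumed for this toy example)
T = 5000     # horizon
WINDOW = 50  # sliding-window size (illustrative choice)
SIGMA = 0.1  # reward noise std, assumed known

def expected_reward(arm, n_pulls):
    """Toy rested rising reward curve: concave and increasing in n_pulls."""
    ceilings = [0.6, 0.8, 0.7]
    rates = [0.05, 0.01, 0.03]
    return ceilings[arm] * (1.0 - np.exp(-rates[arm] * n_pulls))

history = [[] for _ in range(K)]  # per-arm rewards (rested: indexed by own pulls)

for t in range(T):
    # Thompson step: sample a mean for each arm from a Gaussian posterior
    # built only from that arm's last WINDOW observations.
    samples = np.empty(K)
    for a in range(K):
        recent = history[a][-WINDOW:]
        if not recent:
            samples[a] = np.inf  # force one initial pull of every arm
        else:
            mean = np.mean(recent)
            std = SIGMA / np.sqrt(len(recent))
            samples[a] = rng.normal(mean, std)
    arm = int(np.argmax(samples))

    # Rested dynamics: the pulled arm's mean depends on its own pull count.
    reward = expected_reward(arm, len(history[arm])) + rng.normal(0.0, SIGMA)
    history[arm].append(reward)

print("pull counts per arm:", [len(h) for h in history])
```

The window discards stale observations from early pulls, when an arm's mean was still low, so the posterior tracks the arm's current (risen) reward level rather than its lifetime average.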
Similar Papers
Order Optimal Regret Bounds for Sharpe Ratio Optimization in the Bandit Setting
Machine Learning (CS)
Helps computers make smarter choices with less risk.
Power Constrained Nonstationary Bandits with Habituation and Recovery Dynamics
Machine Learning (CS)
Helps doctors find best treatments for everyone.
A Broader View of Thompson Sampling
Machine Learning (CS)
Explains how a smart computer choice system works.