Constrained Feedback Learning for Non-Stationary Multi-Armed Bandits
By: Shaoang Li, Jian Li
Potential Business Impact:
Helps computers learn when they can't always get answers.
Non-stationary multi-armed bandits enable agents to adapt to changing environments by incorporating mechanisms to detect and respond to shifts in reward distributions, making them well-suited for dynamic settings. However, existing approaches typically assume that reward feedback is available at every round, an assumption that overlooks many real-world scenarios where feedback is limited. In this paper, we take a significant step forward by introducing a new model of constrained feedback in non-stationary multi-armed bandits, where the availability of reward feedback is restricted. We propose the first prior-free algorithm (one that requires no prior knowledge of the degree of non-stationarity) that achieves near-optimal dynamic regret in this setting. Specifically, our algorithm attains a dynamic regret of $\tilde{\mathcal{O}}(K^{1/3} V_T^{1/3} T / B^{1/3})$, where $T$ is the number of rounds, $K$ is the number of arms, $B$ is the query budget, and $V_T$ is the variation budget capturing the degree of non-stationarity.
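For intuition, setting $B = T$ recovers the classic $\tilde{\mathcal{O}}(K^{1/3} V_T^{1/3} T^{2/3})$ rate for non-stationary bandits with full feedback, and the bound degrades gracefully as the query budget shrinks. The sketch below is not the paper's prior-free algorithm; it only illustrates the constrained-feedback interaction protocol (an arm is pulled every round, but the reward is observed on at most $B$ queried rounds while the arm means drift), using a hypothetical naive explore-when-queried baseline with scheduled restarts.

```python
import numpy as np


def toy_constrained_feedback_run(T=10_000, K=5, B=1_000, n_changes=4, seed=0):
    """Toy simulation of the constrained-feedback protocol: the learner pulls an
    arm every round but observes the reward on at most B rounds, while the
    Bernoulli arm means shift abruptly (total drift plays the role of V_T).
    The policy here is an illustrative stand-in, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    means = rng.uniform(0.2, 0.8, size=K)

    # Spend the query budget uniformly at random over the horizon (an assumption).
    query_rounds = set(rng.choice(T, size=B, replace=False).tolist())

    counts = np.zeros(K)
    sums = np.zeros(K)
    dynamic_regret = 0.0

    for t in range(T):
        if t > 0 and t % (T // n_changes) == 0:
            means = rng.uniform(0.2, 0.8, size=K)  # abrupt distribution shift
            counts[:] = 0.0                        # naive scheduled restart
            sums[:] = 0.0
        if t in query_rounds:
            arm = int(rng.integers(K))             # explore when feedback will arrive
        else:
            est = np.where(counts > 0, sums / np.maximum(counts, 1.0), 1.0)
            arm = int(np.argmax(est))              # otherwise exploit current estimates
        reward = float(rng.random() < means[arm])
        if t in query_rounds:                      # feedback is revealed only if queried
            counts[arm] += 1.0
            sums[arm] += reward
        dynamic_regret += means.max() - means[arm]

    return dynamic_regret


if __name__ == "__main__":
    print(f"toy baseline dynamic regret: {toy_constrained_feedback_run():.1f}")
```

Varying `B` in this toy setup shows the qualitative effect the bound captures: fewer queried rounds mean stale estimates after each shift and correspondingly larger dynamic regret.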
Similar Papers
Finite-Time Guarantees for Multi-Agent Combinatorial Bandits with Nonstationary Rewards
Machine Learning (CS)
Helps health programs reach more people effectively.
Non-Stationary Restless Multi-Armed Bandits with Provable Guarantee
Machine Learning (CS)
Helps computers learn when things change.
Fooling Algorithms in Non-Stationary Bandits using Belief Inertia
Machine Learning (CS)
Makes smart guessing games learn faster from changes.