Incentivized Lipschitz Bandits
By: Sourav Chakraborty, Amit Kiran Rege, Claire Monteleoni, and more
Potential Business Impact:
Helps robots learn faster with smart rewards.
We study incentivized exploration in multi-armed bandit (MAB) settings with infinitely many arms modeled as elements of continuous metric spaces. Unlike classical bandit models, we consider scenarios where the decision-maker (principal) incentivizes myopic agents to explore beyond their greedy choices through compensation, with the added complication of reward drift: biased feedback arising from the incentives. We propose novel incentivized exploration algorithms that discretize the infinite arm space uniformly and demonstrate that these algorithms simultaneously achieve sublinear cumulative regret and sublinear total compensation. Specifically, we derive regret and compensation bounds of $\tilde{O}(T^{(d+1)/(d+2)})$, where $d$ is the covering dimension of the metric space. Furthermore, we generalize our results to contextual bandits, achieving comparable performance guarantees. We validate our theoretical findings through numerical simulations.
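To make the setup concrete, below is a minimal illustrative sketch (not the authors' exact algorithm) of the core ideas: uniformly discretize a Lipschitz arm space $[0,1]$ into roughly $T^{1/(d+2)}$ cells, run a UCB-style rule over the discretized arms, pay compensation whenever the recommended arm differs from the myopic agent's greedy choice, and model reward drift as an incentive-dependent bias in the observed feedback. The reward function, noise level, and drift model here are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch only: incentivized UCB on a uniform discretization of [0, 1].
rng = np.random.default_rng(0)

T = 10_000                            # horizon
d = 1                                 # covering dimension of [0, 1]
K = int(np.ceil(T ** (1 / (d + 2))))  # ~T^{1/(d+2)} arms, matching the discretization rate
arms = (np.arange(K) + 0.5) / K       # centers of the K uniform cells

def true_mean(x):
    """A 1-Lipschitz mean-reward function on [0, 1] (unknown to the learner; assumed here)."""
    return 0.9 - abs(x - 0.73)

counts = np.zeros(K)
means = np.zeros(K)
drift = 0.1                           # assumed reward drift per unit of compensation
total_regret = 0.0
total_compensation = 0.0
best = true_mean(arms).max()

for t in range(1, T + 1):
    # Principal's UCB recommendation over the discretized arms.
    ucb = means + np.sqrt(2 * np.log(T) / np.maximum(counts, 1))
    ucb[counts == 0] = np.inf
    recommend = int(np.argmax(ucb))

    # A myopic agent would pull the empirically best arm; compensation bridges the gap.
    greedy = int(np.argmax(means)) if counts.sum() > 0 else recommend
    payment = max(0.0, means[greedy] - means[recommend]) if recommend != greedy else 0.0
    total_compensation += payment

    # Observed feedback is biased by the incentive (reward drift).
    reward = true_mean(arms[recommend]) + rng.normal(0, 0.1) + drift * payment
    counts[recommend] += 1
    means[recommend] += (reward - means[recommend]) / counts[recommend]

    total_regret += best - true_mean(arms[recommend])

print(f"arms={K}, regret={total_regret:.1f}, compensation={total_compensation:.1f}")
```

In this sketch the discretization granularity trades off approximation error (Lipschitz bias within a cell) against the number of arms to explore, which is the source of the $T^{(d+1)/(d+2)}$ rate in the abstract; the compensation and drift terms are simplified placeholders for the paper's incentive model.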
Similar Papers
Lipschitz Bandits with Stochastic Delayed Feedback
Machine Learning (CS)
Helps computers learn faster with delayed rewards.
Quantum Lipschitz Bandits
Machine Learning (CS)
Quantum computers learn faster to pick best choices.
Cascading Bandits With Feedback
Machine Learning (CS)
Helps smart devices choose the best AI model.