Vector preference-based contextual bandits under distributional shifts
By: Apurv Shukla, P. R. Kumar
Potential Business Impact:
Helps computers learn better when things change.
We consider contextual bandit learning under distribution shift when reward vectors are ordered according to a given preference cone. We propose a policy based on adaptive discretization and optimistic elimination that self-tunes to the underlying distribution shift. To evaluate this policy, we introduce the notion of preference-based regret, which measures the performance of a policy in terms of the distance between Pareto fronts. We establish upper bounds on this regret under various assumptions on the nature of the distribution shift. Our regret bounds generalize known results for the case of no distribution shift with vectorial rewards, and scale gracefully with the problem parameters in the presence of distribution shift.
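As a rough sketch of the objects involved (illustrative definitions only, not necessarily the paper's exact formulation), the preference cone C induces a partial order on reward vectors, the Pareto front collects the non-dominated arms at each round, and preference-based regret can be read as a cumulative distance to that front:

\[
  x \preceq_C y \;\iff\; y - x \in C
  \quad\text{(partial order induced by the cone } C \subseteq \mathbb{R}^d\text{)}
\]
\[
  \mathcal{P}_t \;=\; \bigl\{\, a : \text{there is no } a' \text{ with } \mu_t(a) \preceq_C \mu_t(a') \text{ and } \mu_t(a') \neq \mu_t(a) \,\bigr\}
  \quad\text{(Pareto front at round } t\text{)}
\]
\[
  \mathcal{R}_T \;=\; \sum_{t=1}^{T} \operatorname{dist}\bigl(\mu_t(a_t),\, \{\mu_t(a) : a \in \mathcal{P}_t\}\bigr)
  \quad\text{(cumulative distance to the Pareto front)}
\]

Under distribution shift the mean rewards \(\mu_t(\cdot)\), and hence the Pareto front \(\mathcal{P}_t\), move over time, so the regret is measured against a moving target; the exact metric and front comparison used in the paper may differ from this sketch.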
Similar Papers
Incentivized Lipschitz Bandits
Machine Learning (CS)
Helps robots learn faster with smart rewards.
Preference-centric Bandits: Optimality of Mixtures and Regret-efficient Algorithms
Machine Learning (Stat)
Helps computers choose the best option, even with risks.
Recycling History: Efficient Recommendations from Contextual Dueling Bandits
Machine Learning (CS)
Helps apps learn what you like faster.