Vector preference-based contextual bandits under distributional shifts

Published: August 21, 2025 | arXiv ID: 2508.15966v1

By: Apurv Shukla, P. R. Kumar

Potential Business Impact:

Helps decision-making systems (e.g., A/B testing and recommendation engines) keep learning effectively when the data distribution shifts over time and rewards involve multiple, possibly competing objectives.

Business Areas:
A/B Testing, Data and Analytics

We consider contextual bandit learning under distribution shift when reward vectors are ordered according to a given preference cone. We propose an adaptive-discretization, optimistic-elimination-based policy that self-tunes to the underlying distribution shift. To measure the performance of this policy, we introduce the notion of preference-based regret, which evaluates a policy in terms of the distance between Pareto fronts. We study the performance of this policy by establishing upper bounds on its regret under various assumptions on the nature of the distribution shift. Our regret bounds generalize known results for the case of no distribution shift and for vectorial-reward settings, and scale gracefully with problem parameters in the presence of distribution shifts.
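To make the preference-cone ordering and Pareto-front comparison concrete, here is a minimal Python sketch. It assumes a polyhedral preference cone {x : Ax ≥ 0} and uses a simple Hausdorff-style distance between finite Pareto fronts as an illustrative stand-in for the paper's preference-based regret; the paper's actual definitions (adaptive discretization, elimination, and regret under shift) are not reproduced.

```python
# Illustrative sketch only, not the paper's algorithm.
# Assumptions: the preference cone is polyhedral, C = {x : A x >= 0},
# and "distance between Pareto fronts" is approximated by a symmetric
# Hausdorff distance between finite sets of reward vectors.
import numpy as np

def dominates(y, x, A, tol=1e-9):
    """y cone-dominates x if (y - x) lies in the cone {v : A v >= 0} and y != x."""
    d = y - x
    return np.all(A @ d >= -tol) and np.linalg.norm(d) > tol

def pareto_front(rewards, A):
    """Return the rows of `rewards` not cone-dominated by any other row."""
    front = []
    for i, x in enumerate(rewards):
        if not any(dominates(y, x, A) for j, y in enumerate(rewards) if j != i):
            front.append(x)
    return np.array(front)

def front_distance(front_a, front_b):
    """Symmetric Hausdorff distance between two finite Pareto fronts."""
    def one_sided(P, Q):
        return max(min(np.linalg.norm(p - q) for q in Q) for p in P)
    return max(one_sided(front_a, front_b), one_sided(front_b, front_a))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.eye(2)  # identity cone = usual componentwise Pareto order
    true_rewards = rng.uniform(size=(10, 2))            # mean reward vectors of 10 arms
    played = true_rewards[rng.choice(10, size=5)]        # arms a hypothetical policy played
    optimal_front = pareto_front(true_rewards, A)
    achieved_front = pareto_front(played, A)
    print("illustrative front distance:", front_distance(optimal_front, achieved_front))
```

Choosing A = identity recovers the standard Pareto (componentwise) order; other cones encode stronger or weaker preferences over trade-offs between reward components.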

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)