Adaptive Data Augmentation for Thompson Sampling
By: Wonyoung Kim
Potential Business Impact:
Learns which choices yield the highest rewards more quickly.
In linear contextual bandits, the objective is to select actions that maximize cumulative rewards, modeled as a linear function with unknown parameters. Although Thompson Sampling performs well empirically, it does not achieve optimal regret bounds. This paper proposes a nearly minimax optimal Thompson Sampling algorithm for linear contextual bandits, built on a novel estimator that uses adaptive augmentation and coupling of hypothetical samples designed for efficient parameter learning. The proposed estimator accurately predicts rewards for all arms without relying on assumptions on the context distribution. Empirical results show robust performance and significant improvement over existing methods.
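For readers unfamiliar with the setting, the sketch below shows a standard Thompson Sampling baseline for linear contextual bandits (Gaussian posterior sampling over the unknown parameter). This is the vanilla algorithm the paper improves upon, not the paper's augmented estimator; all function and variable names here are illustrative assumptions.

```python
import numpy as np

def linear_thompson_sampling(contexts_fn, reward_fn, d, n_rounds, v=1.0, seed=0):
    """Baseline Thompson Sampling for linear contextual bandits.

    contexts_fn(t) -> (n_arms, d) context matrix at round t.
    reward_fn(x, rng) -> noisy scalar reward for chosen context x.
    v scales the posterior covariance (exploration level).
    """
    rng = np.random.default_rng(seed)
    B = np.eye(d)          # ridge precision matrix  B = I + sum x x^T
    b = np.zeros(d)        # running sum of reward * context
    rewards = []
    for t in range(n_rounds):
        X = contexts_fn(t)
        mu_hat = np.linalg.solve(B, b)           # ridge point estimate
        cov = v**2 * np.linalg.inv(B)            # posterior covariance
        theta_tilde = rng.multivariate_normal(mu_hat, cov)  # posterior draw
        a = int(np.argmax(X @ theta_tilde))      # act greedily on the draw
        r = reward_fn(X[a], rng)
        B += np.outer(X[a], X[a])                # update sufficient statistics
        b += r * X[a]
        rewards.append(r)
    return np.array(rewards), np.linalg.solve(B, b)

# Synthetic example: 5 arms, 3-dim contexts, linear reward with Gaussian noise.
theta_star = np.array([1.0, 0.5, -0.5])
ctx_rng = np.random.default_rng(42)

def contexts_fn(t):
    return ctx_rng.standard_normal((5, 3))

def reward_fn(x, rng):
    return float(x @ theta_star + 0.1 * rng.standard_normal())

rewards, theta_est = linear_thompson_sampling(contexts_fn, reward_fn, d=3, n_rounds=1000)
```

The paper's contribution targets the weakness of this baseline: the posterior inflation `v` needed for worst-case regret guarantees makes vanilla Thompson Sampling suboptimal, which the proposed augmented estimator addresses without assumptions on how the contexts are generated.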
Similar Papers
Sparse Nonparametric Contextual Bandits
Machine Learning (Stat)
Helps computers learn best choices faster.
Thompson Sampling for Multi-Objective Linear Contextual Bandit
Machine Learning (Stat)
Helps computers make better choices with many goals.
Constrained Linear Thompson Sampling
Machine Learning (CS)
Helps computers learn safely and faster.