Co-Exploration and Co-Exploitation via Shared Structure in Multi-Task Bandits
By: Sumantrak Mukherjee, Serafima Lebedeva, Valentin Margraf, et al.
We propose a novel Bayesian framework for efficient exploration in contextual multi-task multi-armed bandit settings in which the context is only partially observed and dependencies between reward distributions are induced by latent context variables. To exploit these structural dependencies, our approach integrates observations across all tasks and learns a global joint distribution, while still allowing personalised inference for new tasks. We identify two key sources of epistemic uncertainty: structural uncertainty in the latent reward dependencies across arms and tasks, and user-specific uncertainty due to incomplete context and limited interaction history. To put the method into practice, we represent the joint distribution over tasks and rewards with a particle-based approximation of a log-density Gaussian process. This representation enables flexible, data-driven discovery of both inter-arm and inter-task dependencies without prior assumptions on the latent variables. Empirically, our method outperforms baselines such as hierarchical model bandits, especially under model misspecification or complex latent heterogeneity.
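The abstract includes no code, so the following is a minimal, illustrative Python sketch of the core idea: particle-based Thompson sampling in which each particle is a joint hypothesis over all (task, arm) reward means, so an observation from any task reweights beliefs about every task. A simple shared-plus-deviation Gaussian prior stands in for the paper's log-density Gaussian process, and the Gaussian likelihood, hyperparameters, and toy environment below are all assumptions, not the authors' implementation.

```python
# Sketch of particle-based Thompson sampling with shared structure across
# tasks. NOT the paper's method: the prior, likelihood, and all constants
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 200   # particles approximating the joint posterior
N_TASKS = 5         # e.g. users
N_ARMS = 3
NOISE_STD = 0.5     # assumed known observation noise

# Each particle encodes mean rewards for every (task, arm) pair.
# Inter-task correlation comes from a shared arm-level effect plus
# smaller task-specific deviations.
shared = rng.normal(0.0, 1.0, size=(N_PARTICLES, 1, N_ARMS))
deviation = rng.normal(0.0, 0.3, size=(N_PARTICLES, N_TASKS, N_ARMS))
particles = shared + deviation                  # shape (P, T, A)
log_w = np.zeros(N_PARTICLES)                   # log posterior weights

def true_reward(task, arm):
    # Hypothetical environment used only for this demo.
    means = np.linspace(-1.0, 1.0, N_ARMS) + 0.2 * task
    return means[arm] + rng.normal(0.0, NOISE_STD)

for t in range(500):
    task = rng.integers(N_TASKS)                # a task arrives
    # Thompson step: draw one particle from the current posterior and
    # act greedily under that hypothesis.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    p = rng.choice(N_PARTICLES, p=w)
    arm = int(np.argmax(particles[p, task]))
    r = true_reward(task, arm)
    # Bayes update: reweight every particle by the Gaussian likelihood
    # of the observed reward. Data from ANY task informs ALL tasks via
    # the shared component baked into each particle.
    log_w += -0.5 * ((r - particles[:, task, arm]) / NOISE_STD) ** 2

w = np.exp(log_w - log_w.max())
w /= w.sum()
posterior_mean = np.tensordot(w, particles, axes=1)   # shape (T, A)
print("posterior mean rewards per task:\n", np.round(posterior_mean, 2))
```

The mechanism mirrors the co-exploration premise: since every particle is a full joint hypothesis, a single reward observation for one user shifts posterior mass over the shared structure and thereby over all users. A production version would need resampling to combat particle degeneracy, which this sketch omits.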
Similar Papers
- Empirical Bayesian Multi-Bandit Learning (Machine Learning, CS): shares statistical strength across multiple bandit problems to improve decisions in each.
- On Transportability for Structural Causal Bandits (Machine Learning, CS): studies when knowledge gained in one causal bandit environment transfers to another.
- Learning Peer Influence Probabilities with Linear Contextual Bandits (Machine Learning, CS): uses linear contextual bandits to estimate how likely peers are to influence one another.