On Transportability for Structural Causal Bandits
By: Min Woo Park, Sanghack Lee
Potential Business Impact:
Helps decision-making systems reuse data from different environments to learn faster.
Intelligent agents equipped with causal knowledge can optimize their action spaces to avoid unnecessary exploration. The structural causal bandit framework provides a graphical characterization for identifying actions that cannot maximize rewards, by leveraging prior knowledge of the underlying causal structure. While such knowledge enables an agent to estimate the expected rewards of certain actions based on others during online interactions, there has been little guidance on how to transfer information inferred from arbitrary combinations of datasets collected under different conditions (observational or experimental) and from heterogeneous environments. In this paper, we investigate the structural causal bandit with transportability, where priors from the source environments are fused to enhance learning in the deployment setting. We demonstrate that it is possible to exploit invariances across environments to consistently improve learning. The resulting bandit algorithm achieves a sub-linear regret bound with an explicit dependence on the informativeness of the prior data, and it may outperform standard bandit approaches that rely solely on online learning.
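The core idea of fusing prior data into online learning can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it is a generic UCB1-style bandit whose per-arm estimates are seeded with prior counts and means (standing in for information transported from source environments). The class name `PriorInformedUCB` and its interface are illustrative assumptions.

```python
import math

class PriorInformedUCB:
    """UCB1-style bandit whose arm statistics are seeded with prior data.

    Illustrative sketch only: prior_counts/prior_means stand in for
    information transported from source environments; the more informative
    the prior, the less online exploration the agent needs.
    """

    def __init__(self, n_arms, prior_counts=None, prior_means=None):
        # Effective sample counts and reward estimates, seeded by the prior.
        self.n = list(prior_counts) if prior_counts else [0] * n_arms
        self.mean = list(prior_means) if prior_means else [0.0] * n_arms
        self.t = sum(self.n)  # total (prior + online) samples seen

    def select(self):
        """Return the arm with the highest upper confidence bound."""
        self.t += 1
        # Pull each never-seen arm once before trusting the bounds.
        for arm, count in enumerate(self.n):
            if count == 0:
                return arm
        ucb = [self.mean[a] + math.sqrt(2.0 * math.log(self.t) / self.n[a])
               for a in range(len(self.n))]
        return max(range(len(self.n)), key=lambda a: ucb[a])

    def update(self, arm, reward):
        """Incorporate one observed reward via an incremental mean."""
        self.n[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.n[arm]
```

With an informative prior (counts and means close to the truth), the confidence bonuses of suboptimal arms shrink quickly and the agent concentrates pulls on the best arm far sooner than a cold-start learner, which mirrors the abstract's point that regret depends on how informative the prior data is.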
Similar Papers
Co-Exploration and Co-Exploitation via Shared Structure in Multi-Task Bandits
Machine Learning (CS)
Learns from many tasks to solve new ones.
Robust Causal Discovery under Imperfect Structural Constraints
Machine Learning (CS)
Finds true causes even with bad clues.
On the identifiability of causal graphs with multiple environments
Machine Learning (Stat)
Finds cause-and-effect relationships using different data.