Automatic Reward Shaping from Confounded Offline Data
By: Mingxuan Li, Junzhe Zhang, Elias Bareinboim
Potential Business Impact:
Lets AI learn safe game-playing strategies from biased past data.
A key task in Artificial Intelligence is learning effective policies for controlling agents in unknown environments to optimize performance measures. Off-policy learning methods, like Q-learning, allow learners to make optimal decisions based on past experiences. This paper studies off-policy learning from biased data in complex and high-dimensional domains where unobserved confounding cannot be ruled out a priori. Building on the celebrated Deep Q-Network (DQN), we propose a novel deep reinforcement learning algorithm robust to confounding biases in observed data. Specifically, our algorithm attempts to find a safe policy for the worst-case environment compatible with the observations. We apply our method to twelve confounded Atari games and find that it consistently dominates the standard DQN in all games where the observed inputs to the behavioral and target policies differ and unobserved confounders exist.
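To make the worst-case idea concrete, here is a minimal sketch of a pessimistic TD target in PyTorch. The names (`QNet`, `pessimistic_td_target`) and the particular lower-bounding scheme (a sensitivity parameter `confound_level` that mixes the bootstrapped value with a crude minimum return) are illustrative assumptions, not the paper's actual causal bound; the point is only that the Q-target is evaluated against the least favorable environment consistent with the confounded offline batch, rather than taking the observed transitions at face value.

```python
# Illustrative sketch only: a DQN-style TD target that hedges against
# unobserved confounding by bootstrapping from a worst-case (lower-bounded)
# next-state value. The specific bound below is an assumption for exposition,
# not the algorithm proposed in the paper.
import torch
import torch.nn as nn


class QNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def pessimistic_td_target(q_target: QNet,
                          reward: torch.Tensor,
                          next_obs: torch.Tensor,
                          done: torch.Tensor,
                          gamma: float = 0.99,
                          confound_level: float = 0.1,
                          r_min: float = 0.0) -> torch.Tensor:
    """Worst-case TD target: with weight `confound_level`, the observed
    transition is attributed to an unobserved confounder, so the bootstrapped
    value is shrunk toward a lower bound on the achievable return."""
    with torch.no_grad():
        next_q = q_target(next_obs).max(dim=1).values        # standard DQN bootstrap
        v_min = r_min / (1.0 - gamma)                         # crude lower bound on value
        next_q_lb = (1 - confound_level) * next_q + confound_level * v_min
        return reward + gamma * (1.0 - done) * next_q_lb


# Usage on a random batch, purely for illustration.
if __name__ == "__main__":
    q, q_tgt = QNet(8, 4), QNet(8, 4)
    obs, next_obs = torch.randn(32, 8), torch.randn(32, 8)
    actions = torch.randint(0, 4, (32,))
    rewards, dones = torch.rand(32), torch.zeros(32)

    target = pessimistic_td_target(q_tgt, rewards, next_obs, dones)
    q_sa = q(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    loss.backward()
    print(float(loss))
```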
Similar Papers
Confounding Robust Deep Reinforcement Learning: A Causal Approach
Artificial Intelligence
Makes AI learn safely from bad past game data.
Quantile-Optimal Policy Learning under Unmeasured Confounding
Machine Learning (Stat)
Finds best decisions even with missing info.
Reinforcement Learning with Continuous Actions Under Unmeasured Confounding
Machine Learning (Stat)
Teaches computers to make the best choices.