Unifying Causal Reinforcement Learning: Survey, Taxonomy, Algorithms and Applications
By: Cristiano da Costa Cunha, Wei Liu, Tim French, and more
Integrating causal inference (CI) with reinforcement learning (RL) has emerged as a powerful paradigm for addressing critical limitations of classical RL, including low explainability, lack of robustness, and generalization failures. Traditional RL techniques, which typically rely on correlation-driven decision-making, struggle when faced with distribution shifts, confounding variables, and dynamic environments. Causal reinforcement learning (CRL), leveraging the foundational principles of causal inference, offers promising solutions to these challenges by explicitly modeling cause-and-effect relationships. In this survey, we systematically review recent advances at the intersection of causal inference and RL. We categorize existing approaches into causal representation learning, counterfactual policy optimization, offline causal RL, causal transfer learning, and causal explainability. Through this structured analysis, we identify prevailing challenges, highlight empirical successes in practical applications, and discuss open problems. Finally, we outline future research directions, underscoring the potential of CRL for developing robust, generalizable, and interpretable artificial intelligence systems.
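As a minimal illustration of the confounding problem the abstract alludes to, the sketch below (toy numbers of my own choosing, not taken from the survey) contrasts a correlational value estimate with a back-door-adjusted, interventional one. A hidden confounder `u` raises the reward and also makes the logging policy favor action `a=1`, so naively conditioning on the action inflates its apparent value:

```python
# Toy confounded bandit (illustrative assumptions, not from the survey).
# Confounder U influences both the behaviour policy's action and the reward.

P_U = {0: 0.5, 1: 0.5}            # marginal distribution of the confounder
P_A1_given_U = {0: 0.2, 1: 0.8}   # behaviour policy prefers a=1 when u=1

def reward(u, a):
    """Deterministic reward: u raises reward regardless of the action taken."""
    return u + 0.5 * a

# Naive (correlational) estimate E[R | A=1]: conditioning on the observed
# action imports the confounder's influence on action choice.
p_a1 = sum(P_U[u] * P_A1_given_U[u] for u in P_U)
naive = sum(P_U[u] * P_A1_given_U[u] / p_a1 * reward(u, 1) for u in P_U)

# Back-door (interventional) estimate E[R | do(A=1)]: average the reward
# over the confounder's *marginal* distribution instead.
backdoor = sum(P_U[u] * reward(u, 1) for u in P_U)

print(naive)     # ~1.3, inflated by the confounder
print(backdoor)  # ~1.0, the true effect of forcing a=1
```

The gap between the two estimates is exactly the failure mode that correlation-driven RL inherits from off-policy data, and adjustment formulas of this kind underlie several of the offline causal RL methods the survey categorizes.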