Causal Policy Learning in Reinforcement Learning: Backdoor-Adjusted Soft Actor-Critic

Published: June 5, 2025 | arXiv ID: 2506.05445v1

By: Thanh Vinh Vo, Young Lee, Haozhe Ma, and more

Potential Business Impact:

Helps robots and autonomous systems learn reliable control policies from biased historical data, improving robustness and generalization in deployment.

Business Areas:
A/B Testing, Data and Analytics

Hidden confounders that influence both states and actions can bias policy learning in reinforcement learning (RL), leading to suboptimal or non-generalizable behavior. Most RL algorithms ignore this issue, learning policies from observational trajectories based solely on statistical associations rather than causal effects. We propose DoSAC (Do-Calculus Soft Actor-Critic with Backdoor Adjustment), a principled extension of the SAC algorithm that corrects for hidden confounding via causal intervention estimation. DoSAC estimates the interventional policy $\pi(a | \mathrm{do}(s))$ using the backdoor criterion, without requiring access to true confounders or causal labels. To achieve this, we introduce a learnable Backdoor Reconstructor that infers pseudo-past variables (previous state and action) from the current state to enable backdoor adjustment from observational data. This module is integrated into a soft actor-critic framework to compute both the interventional policy and its entropy. Empirical results on continuous control benchmarks show that DoSAC outperforms baselines under confounded settings, with improved robustness, generalization, and policy reliability.
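The abstract describes the mechanism in enough detail for a rough illustration. Below is a minimal PyTorch-style sketch of the idea, not the paper's implementation: a learnable reconstructor predicts pseudo-past variables (previous state and action) from the current state, the policy conditions on them, and averaging over reconstructed pasts approximates the backdoor-adjusted policy $\pi(a \mid \mathrm{do}(s)) = \mathbb{E}_{(s', a')}\left[\pi(a \mid s, s', a')\right]$. All class and function names (`BackdoorReconstructor`, `interventional_action`, etc.) and architectural choices are illustrative assumptions.

```python
# Minimal sketch of a backdoor-adjusted SAC policy (illustrative, not the authors' code).
import torch
import torch.nn as nn

class BackdoorReconstructor(nn.Module):
    """Infers pseudo-past variables (s_{t-1}, a_{t-1}) from the current state s_t."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.prev_state_head = nn.Linear(hidden, state_dim)
        self.prev_action_head = nn.Linear(hidden, action_dim)

    def forward(self, state):
        h = self.net(state)
        return self.prev_state_head(h), self.prev_action_head(h)

class InterventionalGaussianPolicy(nn.Module):
    """Policy conditioned on (s_t, s_{t-1}, a_{t-1}); marginalizing over
    reconstructed pasts approximates the backdoor adjustment
    pi(a | do(s)) = E_{(s', a')}[ pi(a | s, s', a') ]."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        in_dim = 2 * state_dim + action_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, state, prev_state, prev_action):
        h = self.net(torch.cat([state, prev_state, prev_action], dim=-1))
        log_std = self.log_std(h).clamp(-20, 2)  # standard SAC stability bounds
        return self.mu(h), log_std

def interventional_action(state, reconstructor, policy):
    """Sample an action from the approximate interventional policy pi(a | do(s))
    using the usual SAC tanh-squashed reparameterized sampling."""
    prev_s, prev_a = reconstructor(state)
    mu, log_std = policy(state, prev_s, prev_a)
    dist = torch.distributions.Normal(mu, log_std.exp())
    raw = dist.rsample()                 # reparameterized sample
    action = torch.tanh(raw)             # squash to [-1, 1]
    # Log-probability with the tanh change-of-variables correction, as in SAC;
    # this term also feeds the entropy bonus of the soft actor-critic objective.
    log_prob = (dist.log_prob(raw) - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
    return action, log_prob
```

Under these assumptions, the rest of the training loop would follow standard SAC, with the critic and entropy terms computed from `interventional_action` instead of a policy conditioned on the state alone.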

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)