Average-DICE: Stationary Distribution Correction by Regression
By: Fengdi Che, Bryan Chan, Chen Ma, and more
Potential Business Impact:
Makes it more reliable to evaluate decision-making policies from previously collected data by correcting mismatched data distributions.
Off-policy policy evaluation (OPE), an essential component of reinforcement learning, has long suffered from stationary state distribution mismatch, which undermines both the stability and the accuracy of OPE estimates. While existing methods correct distribution shifts by estimating density ratios, they often rely on expensive optimization or backward Bellman-based updates and struggle to outperform simpler baselines. We introduce AVG-DICE, a computationally simple Monte Carlo estimator for the density ratio that averages discounted importance sampling ratios, providing an unbiased and consistent correction. AVG-DICE extends naturally to nonlinear function approximation via regression, which we coarsely tune and test on OPE tasks based on MuJoCo Gym environments, comparing against state-of-the-art density-ratio estimators using their reported hyperparameters. In our experiments, AVG-DICE is at least as accurate as state-of-the-art estimators and sometimes offers orders-of-magnitude improvements. However, a sensitivity analysis shows that the best-performing hyperparameters may vary substantially across discount factors, so re-tuning is recommended.
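As a rough illustration of the averaging idea in the abstract, the sketch below computes discounted importance sampling ratios along a single trajectory collected under a known behavior policy. The function and argument names (avg_dice_mc_targets, pi_prob, mu_prob) are hypothetical, and the exact normalization (e.g., a (1 - gamma) factor or self-normalized averaging) and the regression step for function approximation follow the paper rather than this sketch.

```python
import numpy as np

def avg_dice_mc_targets(trajectory, pi_prob, mu_prob, gamma=0.99):
    """Monte Carlo targets for the density-ratio correction on one trajectory.

    trajectory: list of (state, action) pairs collected under behavior policy mu.
    pi_prob(s, a), mu_prob(s, a): action probabilities under the target and
        behavior policies (assumed known or estimated).
    Returns the discounted importance sampling ratio gamma^t * prod_{k<t} pi/mu
    for each step t; averaging such targets over the dataset is the kind of
    correction the abstract describes.
    """
    targets = []
    cum_ratio = 1.0  # product of action-probability ratios for steps 0..t-1
    for t, (s, a) in enumerate(trajectory):
        targets.append((gamma ** t) * cum_ratio)
        cum_ratio *= pi_prob(s, a) / mu_prob(s, a)
    return np.array(targets)
```

In the function-approximation setting mentioned in the abstract, targets of this kind would be regressed onto state features to obtain a parametric density-ratio model; the specifics of that regression are given in the paper.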
Similar Papers
Semi-gradient DICE for Offline Constrained Reinforcement Learning
Machine Learning (CS)
Helps robots learn safely from past experiences.
SEMDICE: Off-policy State Entropy Maximization via Stationary Distribution Correction Estimation
Machine Learning (CS)
Teaches robots to learn new skills faster.
Variational OOD State Correction for Offline Reinforcement Learning
Machine Learning (CS)
Teaches robots to stay in safe areas.