Score: 2

Mildly Conservative Regularized Evaluation for Offline Reinforcement Learning

Published: August 8, 2025 | arXiv ID: 2508.05960v1

By: Haohui Chen, Zhiyong Chen

Potential Business Impact:

Lets systems learn safe decision-making policies from previously collected data, without new trial-and-error interaction.

Offline reinforcement learning (RL) seeks to learn optimal policies from static datasets without further environment interaction. A key challenge is the distribution shift between the learned and behavior policies, leading to out-of-distribution (OOD) actions and overestimation. To prevent gross overestimation, the value function must remain conservative; however, excessive conservatism may hinder performance improvement. To address this, we propose the mildly conservative regularized evaluation (MCRE) framework, which balances conservatism and performance by combining temporal difference (TD) error with a behavior cloning term in the Bellman backup. Building on this, we develop the mildly conservative regularized Q-learning (MCRQ) algorithm, which integrates MCRE into an off-policy actor-critic framework. Experiments show that MCRQ outperforms strong baselines and state-of-the-art offline RL algorithms on benchmark datasets.
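
The abstract describes combining the TD error with a behavior cloning term inside the Bellman backup, but does not give the exact formulation. Below is a minimal, hedged sketch of one plausible instantiation of such a regularized critic target: the standard backup value is penalized by how far the learned policy's next action strays from the next action recorded in the dataset. The penalty form, the weight `lam`, and all network/function names (`MLP`, `critic_update`) are illustrative assumptions, not the authors' MCRE/MCRQ implementation.

```python
# Sketch of a mildly conservative, behavior-regularized critic target
# (illustrative only; the exact MCRE formulation is not given in the abstract).
import torch
import torch.nn as nn

state_dim, action_dim, gamma, lam = 17, 6, 0.99, 0.1  # lam: assumed regularization weight

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, x):
        return self.net(x)

critic = MLP(state_dim + action_dim, 1)          # Q(s, a)
critic_target = MLP(state_dim + action_dim, 1)   # slowly updated target copy
critic_target.load_state_dict(critic.state_dict())
actor = MLP(state_dim, action_dim)               # deterministic policy pi(s)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

def critic_update(batch):
    s, a, r, s_next, a_next, done = batch        # a_next: next action from the offline dataset
    with torch.no_grad():
        pi_next = torch.tanh(actor(s_next))                       # policy's next action
        q_next = critic_target(torch.cat([s_next, pi_next], -1))  # standard off-policy backup
        # Behavior-cloning-style penalty inside the backup: deviating from the
        # dataset's next action lowers the target, keeping it mildly conservative.
        bc_penalty = ((pi_next - a_next) ** 2).mean(dim=-1, keepdim=True)
        target = r + gamma * (1.0 - done) * (q_next - lam * bc_penalty)
    q = critic(torch.cat([s, a], -1))
    loss = ((q - target) ** 2).mean()            # TD error against the regularized target
    critic_opt.zero_grad()
    loss.backward()
    critic_opt.step()
    return loss.item()

# Usage with a random batch of 32 transitions standing in for an offline dataset.
batch = (torch.randn(32, state_dim), torch.rand(32, action_dim) * 2 - 1,
         torch.randn(32, 1), torch.randn(32, state_dim),
         torch.rand(32, action_dim) * 2 - 1, torch.zeros(32, 1))
print(critic_update(batch))
```

In this sketch, setting `lam = 0` recovers an ordinary off-policy backup, while larger values push the target toward in-dataset behavior, which is one way to read the paper's stated balance between conservatism and performance.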

Country of Origin
🇦🇺 🇨🇳 Australia, China

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)