Score: 2

Taming OOD Actions for Offline Reinforcement Learning: An Advantage-Based Approach

Published: May 8, 2025 | arXiv ID: 2505.05126v3

By: Xuyang Chen, Keyu Yan, Lin Zhao

Potential Business Impact:

Helps robots and other autonomous systems learn effective behaviors from previously collected data, without costly or risky online trial-and-error.

Business Areas:
Autonomous Vehicles, Transportation

Offline reinforcement learning (RL) aims to learn decision-making policies from fixed datasets without online interactions, providing a practical solution where online data collection is expensive or risky. However, offline RL often suffers from distribution shift, resulting in inaccurate evaluation and substantial overestimation of out-of-distribution (OOD) actions. To address this, existing approaches incorporate conservatism by indiscriminately discouraging all OOD actions, thereby hindering the agent's ability to generalize and exploit beneficial ones. In this paper, we propose Advantage-based Diffusion Actor-Critic (ADAC), a novel method that systematically evaluates OOD actions using the batch-optimal value function. Based on this evaluation, ADAC defines an advantage function to modulate the Q-function update, enabling more precise assessment of OOD action quality. We design a custom PointMaze environment and collect datasets to visually reveal that advantage modulation can effectively identify and select superior OOD actions. Extensive experiments show that ADAC achieves state-of-the-art performance on almost all tasks in the D4RL benchmark, with particularly clear margins on the more challenging tasks.
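The core idea described in the abstract, using an advantage estimate derived from a batch-based value function to decide how much the Q-update should trust policy-proposed (possibly OOD) actions, can be illustrated with a minimal sketch. This is not the authors' implementation: the reference networks `q_ref`/`v_ref`, the sigmoid weighting, and the `adv_temperature` parameter are assumptions made for illustration, and the diffusion policy itself is not shown (its sampled actions enter only as `policy_actions`).

```python
# Illustrative sketch of advantage-modulated Q-learning (NOT the paper's ADAC code).
# Assumed/hypothetical names: q_ref, v_ref, adv_temperature, modulated_q_loss.
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Small fully connected network used for both Q(s, a) and V(s)."""

    def __init__(self, in_dim, out_dim=1, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


def advantage_weights(q_ref, v_ref, states, actions, adv_temperature=1.0):
    """Weight candidate (possibly OOD) actions by a reference advantage estimate.

    A(s, a) = Q_ref(s, a) - V_ref(s); a sigmoid maps the advantage into (0, 1),
    so above-average actions receive larger weight in the Q-update.
    """
    with torch.no_grad():
        q_val = q_ref(torch.cat([states, actions], dim=-1))
        v_val = v_ref(states)
        adv = q_val - v_val
    return torch.sigmoid(adv / adv_temperature)


def modulated_q_loss(q_net, q_ref, v_ref, batch, policy_actions, gamma=0.99):
    """TD loss on dataset actions plus an advantage-weighted term on policy actions.

    batch = (s, a, r, s_next, done), with r and done shaped (batch_size, 1).
    """
    s, a, r, s_next, done = batch

    # Standard TD target built from the reference value function (a sketch,
    # not the paper's batch-optimal estimator).
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * v_ref(s_next)
    td_loss = ((q_net(torch.cat([s, a], dim=-1)) - target) ** 2).mean()

    # Advantage-modulated term: pull Q toward the reference value only as much
    # as the (possibly OOD) policy action looks better than average.
    w = advantage_weights(q_ref, v_ref, s, policy_actions)
    with torch.no_grad():
        q_ref_pi = q_ref(torch.cat([s, policy_actions], dim=-1))
    q_pi = q_net(torch.cat([s, policy_actions], dim=-1))
    ood_loss = (w * (q_pi - q_ref_pi) ** 2).mean()

    return td_loss + ood_loss
```

The point of the weighting is that the update stays close to ordinary TD learning on in-distribution actions, while OOD actions are neither uniformly penalized nor blindly trusted: only those whose estimated advantage is positive contribute meaningfully, which mirrors the selective treatment of OOD actions the abstract describes.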

Country of Origin
🇸🇬 Singapore

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)