Distorted Distributional Policy Evaluation for Offline Reinforcement Learning

Published: January 5, 2026 | arXiv ID: 2601.01917v1

By: Ryo Iwaki, Takayuki Osogami

Potential Business Impact:

Helps AI systems learn more effectively from previously collected data, without requiring new trial-and-error interaction.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

While Distributional Reinforcement Learning (DRL) methods have demonstrated strong performance in online settings, their success in offline scenarios remains limited. We hypothesize that a key limitation of existing offline DRL methods lies in their uniform underestimation of return quantiles. This uniform pessimism can lead to overly conservative value estimates, ultimately hindering generalization and performance. To address this, we introduce a novel concept called quantile distortion, which enables non-uniform pessimism by adjusting the degree of conservatism according to the availability of supporting data. Our approach is grounded in theoretical analysis and empirically validated, demonstrating improved performance over uniform pessimism.
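The abstract does not spell out the mechanics of quantile distortion, so the following is only a rough intuition: a minimal NumPy sketch in which a hypothetical `data_support` score (standing in for whatever coverage measure the paper actually uses) shifts the evaluation quantile toward the lower tail when a state-action pair is poorly supported by the offline dataset, instead of applying the same pessimistic shift everywhere.

```python
import numpy as np

def distorted_quantile_value(quantile_estimates, data_support,
                             base_tau=0.5, max_shift=0.4):
    """Illustrative non-uniform pessimism via a distorted quantile level.

    quantile_estimates: array of shape (num_quantiles,), estimated return
        quantiles for a state-action pair, sorted from low to high.
    data_support: scalar in [0, 1], a stand-in measure of how well the
        state-action pair is covered by the offline dataset (1 = well covered).
    """
    num_quantiles = len(quantile_estimates)
    # Distort the target quantile level: well-supported pairs are evaluated
    # near base_tau, poorly supported pairs closer to the lower tail.
    tau = base_tau - max_shift * (1.0 - data_support)
    tau = float(np.clip(tau, 0.0, 1.0))
    # Read off the corresponding quantile estimate.
    idx = int(np.floor(tau * (num_quantiles - 1)))
    return quantile_estimates[idx]

# Example: the same return distribution is valued less pessimistically
# when the offline dataset covers it well.
quantiles = np.linspace(-1.0, 3.0, 51)           # toy return quantiles
print(distorted_quantile_value(quantiles, 0.9))  # well supported -> near median
print(distorted_quantile_value(quantiles, 0.1))  # poorly supported -> lower tail
```

In this toy version, uniform pessimism would correspond to using the same lowered quantile level for every state-action pair; the distortion simply makes the amount of lowering depend on data coverage.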

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)