Distorted Distributional Policy Evaluation for Offline Reinforcement Learning
By: Ryo Iwaki, Takayuki Osogami
Potential Business Impact:
Helps AI learn better decisions from previously collected data.
While Distributional Reinforcement Learning (DRL) methods have demonstrated strong performance in online settings, their success in offline scenarios remains limited. We hypothesize that a key limitation of existing offline DRL methods lies in their uniform underestimation of return quantiles. This uniform pessimism can lead to overly conservative value estimates, ultimately hindering generalization and performance. To address this, we introduce a novel concept called quantile distortion, which enables non-uniform pessimism by adjusting the degree of conservatism based on the availability of supporting data. Our approach is grounded in theoretical analysis and empirically validated, demonstrating improved performance over uniform pessimism.
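The abstract does not specify the form of the quantile distortion, so the sketch below is only a rough illustration of the general idea: a concave power distortion reweights a set of return-quantile estimates toward the low end, and a hypothetical beta parameter tied to a data-support count controls how strong that reweighting is. All names, the distortion form, and the support-to-pessimism mapping are assumptions for illustration, not the paper's method.

```python
# Minimal sketch (not the paper's algorithm): non-uniform pessimism via a
# concave distortion of quantile fractions, where the distortion strength
# depends on how much data supports a given state-action pair.
import numpy as np

def distorted_value(quantiles: np.ndarray, beta: float) -> float:
    """Distorted expectation of a return distribution given by its quantiles.

    quantiles : sorted estimates theta_1 <= ... <= theta_N (QR-style).
    beta      : pessimism level; beta = 0 recovers the plain mean,
                beta > 0 shifts probability mass toward low-return quantiles.
    """
    n = len(quantiles)
    edges = np.linspace(0.0, 1.0, n + 1)            # cumulative fractions 0, 1/N, ..., 1
    g = 1.0 - (1.0 - edges) ** (1.0 + beta)         # concave distortion of tau
    weights = np.diff(g)                            # per-quantile probability mass (sums to 1)
    return float(np.dot(weights, quantiles))

def pessimism_level(support_count: int, beta_max: float = 2.0) -> float:
    """Hypothetical mapping from data availability to distortion strength:
    rarely observed (s, a) pairs get strong pessimism, well-supported ones
    get a value close to the undistorted mean."""
    return beta_max / (1.0 + support_count)

# Example: the same quantile estimates evaluated under weak vs. strong support.
theta = np.linspace(-1.0, 3.0, 32)                   # toy return quantiles
print(distorted_value(theta, pessimism_level(support_count=0)))    # conservative
print(distorted_value(theta, pessimism_level(support_count=100)))  # near the plain mean
```

Under these assumptions, uniform pessimism would instead subtract the same penalty from every quantile regardless of data coverage; the point of the sketch is that the distortion lets the degree of conservatism vary per state-action pair.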
Similar Papers
Provably Near-Optimal Distributionally Robust Reinforcement Learning in Online Settings
Machine Learning (CS)
Teaches robots to work safely in new places.
Distributional Inverse Reinforcement Learning
Machine Learning (CS)
Learns how to do things by watching experts.
Conformal Prediction Beyond the Horizon: Distribution-Free Inference for Policy Evaluation
Machine Learning (Stat)
Makes AI safer by showing when it's unsure.