Sample-Efficient Policy Constraint Offline Deep Reinforcement Learning based on Sample Filtering
By: Yuanhao Chen, Qi Liu, Pengbin Chen, and more
Offline reinforcement learning (RL) aims to learn a policy that maximizes the expected return from a given static dataset of transitions. However, offline RL suffers from the distribution shift problem, and policy constraint offline RL methods were proposed to address it. During policy constraint offline RL training, it is important to keep the difference between the learned policy and the behavior policy within a given threshold, so the learned policy heavily relies on the quality of the behavior policy. This exposes a problem in existing policy constraint methods: if the dataset contains many low-reward transitions, the learned policy is constrained toward a suboptimal reference policy, leading to slow learning, low sample efficiency, and inferior performance. This paper shows that the common practice in policy constraint offline RL of sampling from all transitions in the dataset can be improved. A simple but efficient sample filtering method is proposed to improve sample efficiency and final performance. First, we score transitions by the average reward and average discounted reward of the episodes in the dataset and extract the transition samples with high scores. Second, the high-score transition samples are used to train the offline RL algorithms. We verify the proposed method on a series of offline RL algorithms and benchmark tasks. Experimental results show that the proposed method outperforms the baselines.
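The abstract describes a two-step procedure: score each episode by its average reward and average discounted reward, then keep only transitions from high-scoring episodes for offline RL training. The sketch below illustrates this idea in Python under stated assumptions; the dataset layout (a list of episode dicts), the unweighted combination of the two scores, the `keep_fraction` quantile threshold, and the function names are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def episode_scores(episodes, gamma=0.99):
    """Score each episode by its average reward plus average discounted reward.

    `episodes` is assumed to be a list of dicts holding per-step arrays
    ("observations", "actions", "rewards", "next_observations", "terminals").
    """
    scores = []
    for ep in episodes:
        rewards = np.asarray(ep["rewards"], dtype=np.float64)
        avg_reward = rewards.mean()
        discounts = gamma ** np.arange(len(rewards))
        avg_discounted = (discounts * rewards).mean()
        # Combining the two criteria with an unweighted sum is one plausible choice.
        scores.append(avg_reward + avg_discounted)
    return np.asarray(scores)

def filter_transitions(episodes, keep_fraction=0.5, gamma=0.99):
    """Keep only the transitions that belong to the highest-scoring episodes."""
    scores = episode_scores(episodes, gamma)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    kept = [ep for ep, s in zip(episodes, scores) if s >= threshold]
    # Flatten the surviving episodes into a transition-level dataset that a
    # policy constraint offline RL algorithm can consume in place of the full dataset.
    return {
        key: np.concatenate([ep[key] for ep in kept])
        for key in ("observations", "actions", "rewards",
                    "next_observations", "terminals")
    }
```

A typical use would be to run `filter_transitions` once before training and feed the returned arrays to the offline RL algorithm's replay buffer, leaving the learning algorithm itself unchanged.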
Similar Papers
Adaptive Scaling of Policy Constraints for Offline Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn from old data better.
Adaptive Neighborhood-Constrained Q Learning for Offline Reinforcement Learning
Machine Learning (CS)
Helps robots learn from past mistakes safely.
Safe Reinforcement Learning with Minimal Supervision
Machine Learning (CS)
Teaches robots to learn safely with less data.