Offline Behavioral Data Selection
By: Shiye Lei, Zhihao Cheng, Dacheng Tao
Behavioral cloning is a widely adopted approach for offline policy learning from expert demonstrations. However, the large scale of offline behavioral datasets often makes downstream training computationally intensive. In this paper, we uncover a striking data-saturation phenomenon in offline behavioral data: policy performance saturates rapidly when training on only a small fraction of the dataset. We attribute this effect to the weak alignment between policy performance and test loss, which reveals substantial room for improvement through data selection. To this end, we propose a simple yet effective method, Stepwise Dual Ranking (SDR), which extracts a compact yet informative subset from large-scale offline behavioral datasets. SDR is built on two key principles: (1) stepwise clip, which prioritizes early-stage data; and (2) dual ranking, which selects samples with both high action-value rank and low state-density rank. Extensive experiments and ablation studies on D4RL benchmarks demonstrate that SDR significantly improves data selection for offline behavioral data.
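To make the two principles concrete, here is a minimal sketch of how such a selection procedure could look. It assumes per-sample action-value estimates are already available (e.g., Monte Carlo returns or a fitted Q-network) and uses mean k-NN distance as a state-density proxy; the rank-sum combination and all function names are illustrative, not the authors' exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def stepwise_dual_ranking(states, q_values, timesteps, clip_step, k_select, knn=10):
    """Select a compact subset via (1) stepwise clip and (2) dual ranking.

    states:    (N, d) array of observations
    q_values:  (N,) per-sample action-value estimates (assumed precomputed)
    timesteps: (N,) within-episode step index of each sample
    clip_step: keep only samples with timestep < clip_step (early-stage data)
    k_select:  number of samples to return
    """
    # (1) Stepwise clip: restrict the pool to early-stage transitions.
    idx = np.flatnonzero(timesteps < clip_step)
    s, q = states[idx], q_values[idx]

    # State-density proxy: mean distance to the k nearest neighbors
    # (larger distance => sparser region => lower state density).
    nn = NearestNeighbors(n_neighbors=knn + 1).fit(s)
    dist, _ = nn.kneighbors(s)
    sparsity = dist[:, 1:].mean(axis=1)  # drop the zero self-distance

    # (2) Dual ranking: rank 0 is best under each criterion.
    value_rank = np.argsort(np.argsort(-q))            # high action value first
    density_rank = np.argsort(np.argsort(-sparsity))   # low state density first

    # Combine the two rankings by a simple rank sum (one possible choice)
    # and keep the k_select samples with the best combined rank.
    combined = value_rank + density_rank
    return idx[np.argsort(combined)[:k_select]]

# Hypothetical usage on random data:
# chosen = stepwise_dual_ranking(states, q_values, timesteps,
#                                clip_step=200, k_select=5000)
# subset = dataset[chosen]
```

The rank sum is just one way to trade off the two criteria; a weighted sum or lexicographic ordering would slot into the same structure.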
Similar Papers
Expert or Not? Assessing Data Quality in Offline Reinforcement Learning
Machine Learning (CS)
Finds best robot moves from old game data.
State Diversity Matters in Offline Behavior Distillation
Machine Learning (CS)
Makes AI learn better from less data.
From Imitation to Optimization: A Comparative Study of Offline Learning for Autonomous Driving
Machine Learning (CS)
Teaches self-driving cars to avoid crashes.