Density-Ratio Weighted Behavioral Cloning: Learning Control Policies from Corrupted Datasets
By: Shriram Karpoora Sundara Pandian, Ali Baheri
Potential Business Impact:
Filters bad data so robots learn better.
Offline reinforcement learning (RL) enables policy optimization from fixed datasets, making it suitable for safety-critical applications where online exploration is infeasible. However, these datasets are often contaminated by adversarial poisoning, system errors, or low-quality samples, degrading the performance of standard behavioral cloning (BC) and offline RL methods. This paper introduces Density-Ratio Weighted Behavioral Cloning (Weighted BC), a robust imitation learning approach that uses a small, verified clean reference set to estimate trajectory-level density ratios via a binary discriminator. These ratios are clipped and used as weights in the BC objective, prioritizing clean expert behavior while down-weighting or discarding corrupted data, without requiring knowledge of the contamination mechanism. We establish theoretical guarantees showing convergence to the clean expert policy, with finite-sample bounds that are independent of the contamination rate. We also construct a comprehensive evaluation framework covering reward, state, transition, and action poisoning protocols on continuous control benchmarks. Experiments demonstrate that Weighted BC maintains near-optimal performance even at high contamination ratios, outperforming baselines such as traditional BC, batch-constrained Q-learning (BCQ), and behavior-regularized actor-critic (BRAC).
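To make the weighting scheme concrete, here is a minimal PyTorch sketch, assuming trajectory-level feature vectors, the standard discriminator-based density-ratio estimate D/(1-D), and a continuous-action policy trained with a mean-squared-error BC loss. The network sizes, the clip threshold `clip_max`, and helper names such as `estimate_weights` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Discriminator(nn.Module):
    """Binary classifier: clean reference trajectories (label 1) vs. offline data (label 0)."""

    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, traj_feats):
        return self.net(traj_feats).squeeze(-1)  # logits


def train_discriminator(disc, clean_feats, data_feats, steps=1000, lr=1e-3):
    """Fit the discriminator to separate the clean reference set from the offline dataset."""
    opt = torch.optim.Adam(disc.parameters(), lr=lr)
    for _ in range(steps):
        logits_clean = disc(clean_feats)
        logits_data = disc(data_feats)
        loss = (
            F.binary_cross_entropy_with_logits(logits_clean, torch.ones_like(logits_clean))
            + F.binary_cross_entropy_with_logits(logits_data, torch.zeros_like(logits_data))
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return disc


def estimate_weights(disc, traj_feats, clip_max=10.0):
    """Density ratio p_clean / p_data per trajectory, then clipped.

    For an optimal discriminator, D / (1 - D) equals the density ratio,
    which is exp(logit) for a sigmoid output.
    """
    with torch.no_grad():
        ratios = torch.exp(disc(traj_feats))
        return ratios.clamp(max=clip_max)


def weighted_bc_loss(policy, states, actions, traj_weights):
    """BC loss with per-trajectory weights broadcast to their transitions."""
    pred_actions = policy(states)
    per_sample = F.mse_loss(pred_actions, actions, reduction="none").mean(dim=-1)
    return (traj_weights * per_sample).mean()
```

Under this reading, corrupted trajectories receive small weights (the discriminator assigns them low clean-probability), so they contribute little to the BC gradient, while trajectories resembling the verified reference set dominate the imitation objective.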
Similar Papers
Expert or not? assessing data quality in offline reinforcement learning
Machine Learning (CS)
Finds best robot moves from old game data.
From Imitation to Optimization: A Comparative Study of Offline Learning for Autonomous Driving
Machine Learning (CS)
Teaches self-driving cars to learn from mistakes.
Residual Off-Policy RL for Finetuning Behavior Cloning Policies
Robotics
Fine-tunes robot behavior-cloning policies with reinforcement learning.