Provable Sim-to-Real Transfer via Offline Domain Randomization
By: Arnaud Fickinger, Abderrahim Bendahi, Stuart Russell
Potential Business Impact:
Uses real-world data so robots trained in simulation work in the real world.
Reinforcement-learning agents often struggle when deployed from simulation to the real world. A dominant strategy for reducing the sim-to-real gap is domain randomization (DR), which trains the policy across many simulators produced by sampling dynamics parameters, but standard DR ignores offline data already available from the real system. We study offline domain randomization (ODR), which first fits a distribution over simulator parameters to an offline dataset collected from the real system. While a growing body of empirical work reports substantial gains with algorithms such as DROPO, the theoretical foundations of ODR remain largely unexplored. In this work, we (i) formalize ODR as maximum-likelihood estimation over a parametric simulator family, (ii) prove consistency of this estimator under mild regularity and identifiability conditions, showing it converges to the true dynamics as the dataset grows, (iii) derive gap bounds demonstrating that ODR's sim-to-real error is up to an O(M) factor tighter than uniform DR in the finite-simulator case, where M is the number of candidate simulators (with analogous gains in the continuous setting), and (iv) introduce E-DROPO, a variant of DROPO that adds an entropy bonus to prevent variance collapse, yielding broader randomization and more robust zero-shot transfer in practice.
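As a rough sketch of points (i) and (iv): given an offline dataset of real transitions and a parametric simulator family, ODR fits a distribution over simulator parameters by maximum likelihood, and E-DROPO adds an entropy bonus to that objective. The notation below (the dataset \mathcal{D}, parameter distribution p_\phi, simulator parameters \xi, and entropy weight \beta) is illustrative shorthand, not the paper's exact formulation.

\[
  \hat{\phi} \;=\; \arg\max_{\phi} \; \sum_{(s,\,a,\,s') \in \mathcal{D}} \log\, \mathbb{E}_{\xi \sim p_\phi}\!\big[\, p_\xi(s' \mid s, a) \,\big] \;+\; \beta\, \mathcal{H}\!\big(p_\phi\big)
\]

The first term is the maximum-likelihood fit of the simulator-parameter distribution to the offline data; the entropy term \beta\,\mathcal{H}(p_\phi) is the E-DROPO addition that discourages variance collapse, and taking \beta = 0 recovers the plain ODR estimator.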
Similar Papers
Safe Continual Domain Adaptation after Sim2Real Transfer of Reinforcement Learning Policies in Robotics
Robotics
Lets robots learn and change safely in the real world.
Dual-Robust Cross-Domain Offline Reinforcement Learning Against Dynamics Shifts
Machine Learning (CS)
Teaches robots to learn from different experiences safely.
Generalizable Domain Adaptation for Sim-and-Real Policy Co-Training
Robotics
Teaches robots to do tasks with less real practice.