Score: 2

Provable Sim-to-Real Transfer via Offline Domain Randomization

Published: June 11, 2025 | arXiv ID: 2506.10133v1

By: Arnaud Fickinger, Abderrahim Bendahi, Stuart Russell

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Calibrates robot simulators using real-world data, so policies trained in simulation transfer more reliably to physical systems.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement-learning agents often struggle when deployed from simulation to the real world. A dominant strategy for reducing the sim-to-real gap is domain randomization (DR), which trains the policy across many simulators produced by sampling dynamics parameters, but standard DR ignores offline data already available from the real system. We study offline domain randomization (ODR), which first fits a distribution over simulator parameters to an offline dataset. While a growing body of empirical work reports substantial gains with algorithms such as DROPO, the theoretical foundations of ODR remain largely unexplored. In this work, we (i) formalize ODR as maximum-likelihood estimation over a parametric simulator family, (ii) prove consistency of this estimator under mild regularity and identifiability conditions, showing it converges to the true dynamics as the dataset grows, (iii) derive gap bounds demonstrating that ODR's sim-to-real error is up to an O(M) factor tighter than uniform DR in the finite-simulator case (with analogous gains in the continuous setting), and (iv) introduce E-DROPO, a new version of DROPO that adds an entropy bonus to prevent variance collapse, yielding broader randomization and more robust zero-shot transfer in practice.
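To make the two key ideas concrete, here is a minimal sketch of ODR-style fitting on a toy problem. It is not the paper's DROPO/E-DROPO implementation: the 1-D mass-driven simulator, the Gaussian parameter distribution, the `beta` entropy weight, and the names `sim_step` and `objective` are all illustrative assumptions. The sketch fits a distribution N(mu, sigma) over a dynamics parameter by Monte-Carlo marginal likelihood on offline transitions, with an entropy bonus in the spirit of E-DROPO to keep sigma from collapsing to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical 1-D simulator family: next state depends on a mass parameter m.
DT, NOISE = 0.1, 0.05

def sim_step(s, a, m):
    return s + (a / m) * DT

# --- Synthetic "offline" dataset from the real system (true mass = 2.0).
M_TRUE, N = 2.0, 500
S = rng.uniform(-1, 1, N)
A = rng.uniform(-1, 1, N)
S_NEXT = sim_step(S, A, M_TRUE) + NOISE * rng.standard_normal(N)

def objective(mu, log_sigma, beta=0.01, k=64):
    """Monte-Carlo marginal log-likelihood of the offline transitions under
    m ~ N(mu, sigma), plus an entropy bonus (E-DROPO-style) that keeps the
    fitted distribution from collapsing to a point estimate."""
    sigma = np.exp(log_sigma)
    m = mu + sigma * rng.standard_normal(k)           # sampled dynamics params
    m = np.clip(m, 1e-3, None)                        # masses must stay positive
    pred = sim_step(S[:, None], A[:, None], m[None])  # (N, k) predicted next states
    logp = (-0.5 * ((S_NEXT[:, None] - pred) / NOISE) ** 2
            - np.log(NOISE * np.sqrt(2 * np.pi)))
    marginal = np.log(np.exp(logp).mean(axis=1) + 1e-300).mean()
    entropy = log_sigma + 0.5 * np.log(2 * np.pi * np.e)  # Gaussian entropy
    return marginal + beta * entropy

# --- Crude grid search stands in for whatever optimizer one would use in practice.
grid = [(mu, ls) for mu in np.linspace(0.5, 4.0, 36) for ls in np.linspace(-4, 0, 17)]
mu_hat, ls_hat = max(grid, key=lambda p: objective(*p))
print(f"fitted m ~ N({mu_hat:.2f}, {np.exp(ls_hat):.3f}); true m = {M_TRUE}")
```

With `beta = 0`, the likelihood term alone tends to drive sigma toward zero on clean data (the variance collapse the abstract mentions); the entropy bonus trades a little likelihood for a broader randomization distribution, which is what E-DROPO exploits for more robust zero-shot transfer.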

Country of Origin
πŸ‡«πŸ‡· πŸ‡ΊπŸ‡Έ United States, France

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)