Reliable and Scalable Robot Policy Evaluation with Imperfect Simulators
By: Apurva Badithela, David Snyder, Lihan Zha, and more
Potential Business Impact:
Tests robots better with fewer real-world tries.
Rapid progress in imitation learning, foundation models, and large-scale datasets has led to robot manipulation policies that generalize to a wide range of tasks and environments. However, rigorous evaluation of these policies remains a challenge: in practice, robot policies are often evaluated on a small number of hardware trials without any statistical assurances. We present SureSim, a framework that augments large-scale simulation with relatively small-scale real-world testing to provide reliable inferences about the real-world performance of a policy. Our key idea is to formalize the problem of combining real and simulation evaluations as a prediction-powered inference problem, in which a small number of paired real and simulation evaluations are used to rectify the bias of large-scale simulation. We then leverage non-asymptotic mean estimation algorithms to provide confidence intervals on mean policy performance. Using physics-based simulation, we evaluate both diffusion policy and a multi-task fine-tuned \(\pi_0\) on a joint distribution of objects and initial conditions, and find that our approach saves over 20–25% of the hardware evaluation effort needed to achieve similar bounds on policy performance.
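The prediction-powered rectification idea can be sketched in a few lines: the large-scale simulation mean is debiased by the average real-versus-sim gap measured on the small paired set, and a non-asymptotic bound turns the combination into a confidence interval. The sketch below is a minimal illustration under assumptions, not SureSim's actual algorithm: the function names are hypothetical, success outcomes are assumed to lie in \([0,1]\), and Hoeffding bounds stand in for whatever non-asymptotic mean estimators the paper uses.

```python
import numpy as np

def hoeffding_radius(n, delta, value_range):
    # Two-sided Hoeffding radius: with prob >= 1 - delta, the sample mean of
    # n i.i.d. samples with range `value_range` is within this of the true mean.
    return value_range * np.sqrt(np.log(2.0 / delta) / (2.0 * n))

def ppi_mean_ci(sim_large, real_paired, sim_paired, delta=0.05):
    """Prediction-powered point estimate and (1 - delta) CI for real performance.

    sim_large   : success indicators from many simulation-only rollouts
    real_paired : success indicators from a few real hardware rollouts
    sim_paired  : sim outcomes at the same initial conditions as real_paired
    """
    sim_large = np.asarray(sim_large, dtype=float)
    # Rectifier: per-trial gap between real and simulated outcomes.
    rectifier = np.asarray(real_paired, dtype=float) - np.asarray(sim_paired, dtype=float)

    # Point estimate: large-scale sim mean, debiased by the paired rectifier.
    estimate = sim_large.mean() + rectifier.mean()

    # Union bound: spend delta/2 on each term. Sim outcomes lie in [0, 1];
    # the rectifier lies in [-1, 1], so its range is 2.
    radius = (hoeffding_radius(len(sim_large), delta / 2, 1.0)
              + hoeffding_radius(len(rectifier), delta / 2, 2.0))

    # Clip to [0, 1], since the target is a success probability.
    return estimate, (max(0.0, estimate - radius), min(1.0, estimate + radius))
```

In this toy version, a large batch of cheap simulation rollouts shrinks the first radius term, so the interval width is dominated by the small paired set; that is the sense in which adding simulation reduces the hardware trials needed for a bound of a given width.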
Similar Papers
Is Your Imitation Learning Policy Better than Mine? Policy Comparison with Near-Optimal Stopping
Robotics
Robot learning tests finish faster, saving time.
Robot Policy Evaluation for Sim-to-Real Transfer: A Benchmarking Perspective
Robotics
Helps robots learn in games, then work in real life.
Generalizable Domain Adaptation for Sim-and-Real Policy Co-Training
Robotics
Teaches robots to do tasks with less real practice.