Self-evolved Imitation Learning in Simulated World
By: Yifan Ye, Jun Cen, Jing Chen, and more
Potential Business Impact:
Teaches robots new tricks with fewer examples.
Imitation learning has recently gained significant attention, yet training a generalist agent across multiple tasks still requires large-scale expert demonstrations, which are costly and labor-intensive to collect. To address the challenge of limited supervision, we propose Self-Evolved Imitation Learning (SEIL), a framework that progressively improves a few-shot model through simulator interactions. The model first attempts tasks in the simulator, from which successful trajectories are collected as new demonstrations for iterative refinement. To enhance the diversity of these demonstrations, SEIL employs dual-level augmentation: (i) model-level, using an Exponential Moving Average (EMA) model to collaborate with the primary model, and (ii) environment-level, introducing slight variations in initial object positions. We further introduce a lightweight selector that filters complementary and informative trajectories from the generated pool to ensure demonstration quality. These curated samples enable the model to achieve competitive performance with far fewer training examples. Extensive experiments on the LIBERO benchmark show that SEIL achieves new state-of-the-art performance in few-shot imitation learning scenarios. Code is available at https://github.com/Jasper-aaa/SEIL.git.
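The model-level augmentation above relies on an EMA copy of the policy that lags behind the primary model, providing a second, smoother policy to roll out in the simulator. A minimal sketch of the EMA parameter update is shown below; the decay value of 0.99 and the plain-dict parameter representation are illustrative assumptions, not details taken from the paper.

```python
def ema_update(ema_params, params, decay=0.99):
    """Blend the primary model's parameters into the EMA copy in place.

    ema_params: dict of parameter name -> float (the EMA model)
    params:     dict of parameter name -> float (the primary model)
    decay:      how strongly the EMA copy holds on to its old values
    """
    for name, value in params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

# Toy usage: the EMA model trails the primary model as it trains,
# yielding a distinct policy for generating diverse trajectories.
primary = {"w": 1.0}
ema = {"w": 1.0}
for _ in range(3):
    primary["w"] += 0.5          # stand-in for a gradient step
    ema_update(ema, primary)
```

Because the EMA model changes slowly, its rollouts differ from the primary model's, which is what makes the collected trajectory pool more diverse.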
Similar Papers
Self-Adapting Improvement Loops for Robotic Learning
Robotics
Robots learn new tasks by watching and practicing.
LLM-based Interactive Imitation Learning for Robotic Manipulation
Robotics
Teaches robots using AI, not people.
Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation
Robotics
Robots learn many skills by copying and improving.