HIL: Hybrid Imitation Learning of Diverse Parkour Skills from Videos
By: Jiashun Wang, Yifeng Jiang, Haotian Zhang, and more
Potential Business Impact:
Enables simulated game characters to perform diverse, lifelike parkour moves.
Recent data-driven methods leveraging deep reinforcement learning have become an effective paradigm for developing controllers that enable physically simulated characters to produce natural, human-like behaviors. However, these data-driven methods often struggle to adapt to novel environments and to compose diverse skills coherently for more complex tasks. To address these challenges, we propose a hybrid imitation learning (HIL) framework that combines motion tracking for precise skill replication with adversarial imitation learning to enhance adaptability and skill composition. This hybrid learning framework is implemented through parallel multi-task environments and a unified observation space, featuring an agent-centric scene representation to facilitate effective learning from the hybrid parallel environments. Our framework trains a unified controller on parkour data sourced from Internet videos, enabling a simulated character to traverse new environments using diverse and life-like parkour skills. Evaluations across challenging parkour environments demonstrate that our method improves motion quality, increases skill diversity, and achieves competitive task completion compared to previous learning-based methods.
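To make the hybrid idea concrete, below is a minimal sketch (not the authors' code) of how parallel environments could mix a motion-tracking reward with an adversarial, discriminator-based imitation reward. All function names, field names, and weights here are assumptions for illustration only.

```python
# Hybrid reward sketch: some parallel environments supervise the policy with a
# motion-tracking reward, others with an adversarial (discriminator) reward.
# Names and structure are assumptions, not the paper's implementation.
import numpy as np

def tracking_reward(sim_pose, ref_pose, sigma=0.25):
    """Exponentiated pose error against a reference motion frame."""
    err = np.sum((sim_pose - ref_pose) ** 2)
    return float(np.exp(-err / (2.0 * sigma ** 2)))

def adversarial_reward(disc_logit):
    """AMP-style reward computed from a discriminator logit (assumed given)."""
    prob = 1.0 / (1.0 + np.exp(-disc_logit))          # sigmoid
    return float(-np.log(np.maximum(1.0 - prob, 1e-6)))

def hybrid_rewards(batch, w_track=1.0, w_adv=1.0):
    """Route each parallel environment to its reward type.

    `batch` is a list of dicts with keys:
      'mode'       -- 'track' or 'adversarial'
      'sim_pose'   -- simulated character pose (np.ndarray)
      'ref_pose'   -- reference pose from video-derived data (track mode)
      'disc_logit' -- discriminator output on the transition (adversarial mode)
    """
    rewards = []
    for env in batch:
        if env['mode'] == 'track':
            r = w_track * tracking_reward(env['sim_pose'], env['ref_pose'])
        else:
            r = w_adv * adversarial_reward(env['disc_logit'])
        rewards.append(r)
    return np.asarray(rewards)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = [
        {'mode': 'track', 'sim_pose': rng.normal(size=12),
         'ref_pose': rng.normal(size=12), 'disc_logit': 0.0},
        {'mode': 'adversarial', 'sim_pose': rng.normal(size=12),
         'ref_pose': None, 'disc_logit': 1.3},
    ]
    print(hybrid_rewards(demo))
```

In this sketch the two reward signals are kept in separate environments rather than summed per step, mirroring the abstract's description of parallel multi-task environments feeding a single unified controller.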
Similar Papers
HiLo: Learning Whole-Body Human-like Locomotion with Motion Tracking Controller
Robotics
Robots walk and move like people.
NIL: No-data Imitation Learning by Leveraging Pre-trained Video Diffusion Models
CV and Pattern Recognition
Teaches robots new moves from generated videos.
Offline Learning of Controllable Diverse Behaviors
Machine Learning (CS)
Lets robots learn and do many different jobs.