Learning Vision-Driven Reactive Soccer Skills for Humanoid Robots
By: Yushi Wang, Changsheng Luo, Penghui Chen, and more
Potential Business Impact:
Humanoid robots learn reactive soccer skills by linking what they see directly to how they move.
Humanoid soccer poses a representative challenge for embodied intelligence, requiring robots to operate within a tightly coupled perception-action loop. However, existing systems typically rely on decoupled modules, resulting in delayed responses and incoherent behaviors in dynamic environments, while real-world perceptual limitations further exacerbate these issues. In this work, we present a unified reinforcement learning-based controller that enables humanoid robots to acquire reactive soccer skills through the direct integration of visual perception and motion control. Our approach extends Adversarial Motion Priors to perceptual settings in real-world dynamic environments, bridging motion imitation and visually grounded dynamic control. We introduce an encoder-decoder architecture combined with a virtual perception system that models real-world visual characteristics, allowing the policy to recover privileged states from imperfect observations and establish active coordination between perception and action. The resulting controller demonstrates strong reactivity, consistently executing coherent and robust soccer behaviors across various scenarios, including real RoboCup matches.
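To make the encoder-decoder idea from the abstract concrete, here is a minimal sketch (not the authors' code) of a policy that encodes imperfect onboard observations, recovers privileged simulator states from the latent, and outputs actions; the RL term is stubbed with a placeholder AMP-style reward. Module sizes, dimensions (`obs_dim`, `priv_dim`, `act_dim`), and the loss weighting are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): an
# encoder-decoder policy that maps imperfect visual observations to actions
# while reconstructing privileged states.
import torch
import torch.nn as nn

class EncoderDecoderPolicy(nn.Module):
    def __init__(self, obs_dim=96, priv_dim=32, latent_dim=64, act_dim=23):
        super().__init__()
        # Encoder: compresses noisy onboard observations (proprioception plus
        # virtual-perception features such as an estimated ball position).
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim), nn.ELU(),
        )
        # Decoder head: recovers privileged states (e.g. true ball velocity)
        # from the latent, providing a reconstruction training signal.
        self.priv_decoder = nn.Linear(latent_dim, priv_dim)
        # Action head: outputs joint-space actions for the low-level controller.
        self.actor = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        z = self.encoder(obs)
        return self.actor(z), self.priv_decoder(z)

def policy_loss(policy, obs, priv_state, amp_style_reward):
    """Combine privileged-state reconstruction with an RL objective.
    The RL term here is a generic stand-in; in practice it would come from
    PPO together with an adversarial motion-prior discriminator."""
    actions, priv_pred = policy(obs)
    recon_loss = nn.functional.mse_loss(priv_pred, priv_state)
    rl_loss = -amp_style_reward(obs, actions).mean()  # placeholder objective
    return rl_loss + 0.5 * recon_loss  # 0.5 is an assumed weighting

if __name__ == "__main__":
    policy = EncoderDecoderPolicy()
    obs = torch.randn(8, 96)    # batch of imperfect observations
    priv = torch.randn(8, 32)   # privileged states from simulation
    fake_reward = lambda o, a: -a.pow(2).sum(dim=-1)  # stand-in reward
    loss = policy_loss(policy, obs, priv, fake_reward)
    loss.backward()
    print("total loss:", float(loss))
```

The key design point the sketch illustrates is that the reconstruction head is only a training aid: at deployment the policy acts from the latent alone, so it never needs access to privileged simulator states on the real robot.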
Similar Papers
Humanoid Goalkeeper: Learning from Position Conditioned Task-Motion Constraints
Robotics
Robot learns to block fast balls like a goalie.
SoccerDiffusion: Toward Learning End-to-End Humanoid Robot Soccer from Gameplay Recordings
Robotics
Robots learn to play soccer by watching games.
Deep Sensorimotor Control by Imitating Predictive Models of Human Motion
Robotics
Robots learn to move by watching humans.