Integrating Learning-Based Manipulation and Physics-Based Locomotion for Whole-Body Badminton Robot Control
By: Haochen Wang, Zhiwei Shi, Chengxi Zhu, and more
Potential Business Impact:
Robot learns to play badminton by watching and trying.
Learning-based methods, such as imitation learning (IL) and reinforcement learning (RL), can produce excellent control policies for challenging agile robotic tasks, such as sports robots. However, no existing work has harmonized learning-based policies with model-based methods to reduce training complexity and ensure safety and stability for agile badminton robot control. In this paper, we introduce Hamlet, a novel hybrid control system for agile badminton robots. Specifically, we propose a model-based strategy for chassis locomotion that provides a stable base for the arm policy. We introduce a physics-informed "IL+RL" training framework for the learning-based arm policy. In this training framework, a model-based strategy with privileged information is used to guide arm policy training during both the IL and RL phases. In addition, we train the critic model during the IL phase to alleviate the performance drop when transitioning from IL to RL. We present results on our self-engineered badminton robot, achieving a 94.5% success rate against a serving machine and a 90.7% success rate against human players. Our system can be easily generalized to other agile mobile manipulation tasks such as agile catching and table tennis. Our project website: https://dreamstarring.github.io/HAMLET/.
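The abstract's key training idea is that the critic is fitted already during imitation learning, so the value estimate is warm when RL fine-tuning begins. Below is a minimal sketch of that idea; the network sizes, loss weights, data shapes, and the expert interface are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of IL-phase training that warm-starts both actor and critic.
# All dimensions and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.actor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim))
        self.critic = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs):
        return self.actor(obs), self.critic(obs)

def il_phase(model, obs, expert_actions, returns, epochs=100, lr=3e-4):
    """IL phase: imitate a privileged model-based expert AND regress the
    critic toward discounted returns, so the value function is already
    warm when RL starts (mitigating the IL-to-RL performance drop)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        pred_act, pred_val = model(obs)
        loss = (nn.functional.mse_loss(pred_act, expert_actions)
                + 0.5 * nn.functional.mse_loss(pred_val.squeeze(-1), returns))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy data standing in for rollouts of the privileged model-based strategy.
obs = torch.randn(1024, 48)             # hypothetical arm + shuttle observations
expert_actions = torch.randn(1024, 7)   # hypothetical expert joint targets
returns = torch.randn(1024)             # discounted returns from those rollouts
policy = il_phase(ActorCritic(48, 7), obs, expert_actions, returns)
# An RL phase (e.g., PPO-style) would then fine-tune `policy`, reusing its critic.
```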
Similar Papers
Learning coordinated badminton skills for legged manipulators
Robotics
Robot plays badminton by seeing and moving.
Humanoid Whole-Body Badminton via Multi-Stage Reinforcement Learning
Robotics
Robot plays badminton by learning to hit the ball.
Efficient Learning of A Unified Policy For Whole-body Manipulation and Locomotion Skills
Robotics
Robots learn to move and grab better.