Learning to Plan & Schedule with Reinforcement-Learned Bimanual Robot Skills
By: Weikang Wan, Fabio Ramos, Xuning Yang, and more
Potential Business Impact:
Robots learn to use both hands together better.
Long-horizon contact-rich bimanual manipulation presents a significant challenge, requiring complex coordination involving a mixture of parallel execution and sequential collaboration between arms. In this paper, we introduce a hierarchical framework that frames this challenge as an integrated skill planning & scheduling problem, going beyond purely sequential decision-making to support simultaneous skill invocation. Our approach is built upon a library of single-arm and bimanual primitive skills, each trained using Reinforcement Learning (RL) in GPU-accelerated simulation. We then train a Transformer-based planner on a dataset of skill compositions to act as a high-level scheduler, simultaneously predicting the discrete schedule of skills as well as their continuous parameters. We demonstrate that our method achieves higher success rates on complex, contact-rich tasks than end-to-end RL approaches and produces more efficient, coordinated behaviors than traditional sequential-only planners.
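The paper's planner is a learned Transformer that jointly predicts a skill schedule and its continuous parameters; as a much simpler illustration of what "scheduling beyond purely sequential decision-making" means, the toy sketch below (not the authors' code; all skill names and the greedy rule are hypothetical) packs an ordered list of skill invocations into time slots, letting consecutive skills run in parallel whenever their arm sets are disjoint:

```python
# Hypothetical sketch: pack an ordered skill sequence into parallel time
# slots. A skill joins the current slot only if no arm conflict exists,
# which preserves the original sequential ordering between conflicting skills.
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    name: str
    arms: frozenset  # subset of {"left", "right"}

def schedule(skills):
    """Greedy scheduler: merge a skill into the most recent slot when its
    arm set is disjoint from every skill already in that slot; otherwise
    open a new slot (sequential execution)."""
    slots = []
    for skill in skills:
        if slots and all(skill.arms.isdisjoint(s.arms) for s in slots[-1]):
            slots[-1].append(skill)
        else:
            slots.append([skill])
    return slots

plan = [
    Skill("reach_cup", frozenset({"left"})),
    Skill("open_drawer", frozenset({"right"})),     # parallel with reach_cup
    Skill("lift_tray", frozenset({"left", "right"})),  # needs both arms
    Skill("press_button", frozenset({"right"})),
]

for t, slot in enumerate(schedule(plan)):
    print(t, [s.name for s in slot])
# slot 0 runs reach_cup and open_drawer in parallel;
# lift_tray and press_button follow sequentially.
```

A purely sequential planner would execute all four skills one after another; the learned scheduler in the paper instead discovers such parallel opportunities (and the skills' continuous parameters) end to end from data.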
Similar Papers
LLM+MAP: Bimanual Robot Task Planning using Large Language Models and Planning Domain Definition Language
Robotics
Robots use AI to plan two-handed tasks better.
Learning Bimanual Manipulation via Action Chunking and Inter-Arm Coordination with Transformers
Robotics
Robots learn to use two hands together better.
Scene-agnostic Hierarchical Bimanual Task Planning via Visual Affordance Reasoning
Robotics
Robots use two hands to do tasks better.