Learning to Act Through Contact: A Unified View of Multi-Task Robot Learning
By: Shafeef Omar, Majid Khadiv
Potential Business Impact:
Robot learns many jobs with one brain.
We present a unified framework for multi-task locomotion and manipulation policy learning grounded in a contact-explicit representation. Instead of designing different policies for different tasks, our approach unifies the definition of a task through a sequence of contact goals: desired contact positions, timings, and active end-effectors. This enables leveraging the shared structure across diverse contact-rich tasks, leading to a single policy that can perform a wide range of tasks. In particular, we train a goal-conditioned reinforcement learning (RL) policy to realise given contact plans. We validate our framework on multiple robotic embodiments and tasks: a quadruped performing multiple gaits, a humanoid performing multiple bipedal and quadrupedal gaits, and a humanoid executing different bimanual object manipulation tasks. Each of these scenarios is controlled by a single policy trained to execute different tasks grounded in contacts, demonstrating versatile and robust behaviours across morphologically distinct systems. Our results show that explicit contact reasoning significantly improves generalisation to unseen scenarios, positioning contact-explicit policy learning as a promising foundation for scalable loco-manipulation.
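To make the contact-goal idea concrete, here is a minimal sketch of what a "sequence of contact goals" and its use as a goal-conditioning input might look like. This is an illustrative assumption, not the paper's actual implementation: the class and function names (`ContactGoal`, `goal_observation`), the specific fields, and the fixed two-goal horizon are all hypothetical choices for exposition.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ContactGoal:
    """One contact event in a plan: which end-effector touches where, and when.
    (Hypothetical structure; field names are illustrative, not from the paper.)"""
    end_effector: str            # e.g. "left_foot", "right_hand"
    position: Tuple[float, float, float]  # desired contact point in the world frame
    start_time: float            # when the contact should begin (s)
    duration: float              # how long the contact stays active (s)

    def is_active(self, t: float) -> bool:
        return self.start_time <= t < self.start_time + self.duration


def goal_observation(plan: List[ContactGoal], t: float, horizon: int = 2) -> List[float]:
    """Flatten the next `horizon` active/upcoming contact goals into a fixed-size
    vector that could be concatenated with proprioception as the policy's goal input."""
    upcoming = sorted(
        (g for g in plan if g.start_time + g.duration > t),
        key=lambda g: g.start_time,
    )[:horizon]
    obs: List[float] = []
    for g in upcoming:
        obs.extend(g.position)            # 3 numbers: target contact position
        obs.append(g.start_time - t)      # time until contact (negative if already active)
        obs.append(1.0 if g.is_active(t) else 0.0)  # active-contact flag
    obs.extend([0.0] * (5 * horizon - len(obs)))    # zero-pad if the plan is short
    return obs
```

Under this sketch, a trotting gait, a bounding gait, and a bimanual grasp differ only in the plan fed to `goal_observation`, which is the shared structure the abstract's single policy exploits.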
Similar Papers
A Survey on Imitation Learning for Contact-Rich Tasks in Robotics
Robotics
Teaches robots to do tricky jobs by watching.
Robust Model-Based In-Hand Manipulation with Integrated Real-Time Motion-Contact Planning and Tracking
Robotics
Robots can now pick up and move things smoothly.
ContactRL: Safe Reinforcement Learning based Motion Planning for Contact based Human Robot Collaboration
Robotics
Robots learn to touch people safely during work.