When a Robot is More Capable than a Human: Learning from Constrained Demonstrators
By: Xinhu Li, Ayush Jain, Zhaojing Yang, and more
Potential Business Impact:
Robots learn faster by exploring, not just copying.
Learning from demonstrations enables experts to teach robots complex tasks through interfaces such as kinesthetic teaching, joystick control, and sim-to-real transfer. However, these interfaces often constrain the expert's ability to demonstrate optimal behavior due to indirect control, setup restrictions, and hardware safety limits. For example, a joystick may move a robotic arm only in a 2D plane, even though the robot operates in a higher-dimensional space. As a result, demonstrations collected from constrained experts yield suboptimal learned policies. This raises a key question: Can a robot learn a better policy than the one demonstrated by a constrained expert? We address this by allowing the agent to go beyond direct imitation of expert actions and explore shorter, more efficient trajectories. We use the demonstrations to infer a state-only reward signal that measures task progress, and we self-label rewards for states outside the demonstrations using temporal interpolation. Our approach outperforms common imitation learning methods in both sample efficiency and task completion time. On a real WidowX robotic arm, it completes the task in 12 seconds, 10x faster than behavioral cloning, as shown in the real-robot videos at https://sites.google.com/view/constrainedexpert.
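To make the reward idea concrete, here is a minimal sketch of a state-only progress reward: each demonstration state is labeled with its normalized temporal index, and a state not seen in the demonstrations is self-labeled by interpolating the progress values of its nearest demonstration states. The Euclidean distance metric, the inverse-distance weighting, and all function names are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def progress_labels(demo_states: np.ndarray) -> np.ndarray:
    """Label each demonstration state with normalized task progress
    in [0, 1]: the first state gets 0, the last gets 1."""
    n = len(demo_states)
    return np.arange(n) / max(n - 1, 1)

def self_labeled_reward(state: np.ndarray,
                        demo_states: np.ndarray,
                        demo_progress: np.ndarray,
                        k: int = 2) -> float:
    """Self-label an unseen state by temporally interpolating the
    progress of its k nearest demonstration states (inverse-distance
    weighting is an assumption of this sketch, not necessarily the
    paper's exact interpolation scheme)."""
    dists = np.linalg.norm(demo_states - state, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)
    return float(weights @ demo_progress[nearest] / weights.sum())

# Hypothetical usage: a 50-step demonstration in a 7-D state space.
demo = np.cumsum(np.random.randn(50, 7) * 0.01, axis=0)
progress = progress_labels(demo)
reward = self_labeled_reward(demo[10] + 0.005, demo, progress)
```

A reward of this form credits the agent for reaching high-progress states by any route, which is what lets reinforcement learning discover trajectories shorter than the constrained demonstrations.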
Similar Papers
Imitation Learning with Precisely Labeled Human Demonstrations
Robotics
Teaches robots to learn from human actions.
RoboCopilot: Human-in-the-loop Interactive Imitation Learning for Robot Manipulation
Robotics
Robots learn new skills faster by working with people.
Instrumentation for Better Demonstrations: A Case Study
Robotics
Robot learns to pour drinks better with sensors.