Reinforcement learning-based motion imitation for physiologically plausible musculoskeletal motor control
By: Merkourios Simos, Alberto Silvio Chiappa, Alexander Mathis
Potential Business Impact:
Makes simulated human bodies move like real people, with realistic muscle control.
How do humans move? The quest to understand human motion has broad applications in numerous fields, ranging from computer animation and motion synthesis to neuroscience, human prosthetics and rehabilitation. Although advances in reinforcement learning (RL) have produced impressive results in capturing human motion using simplified humanoids, controlling physiologically accurate models of the body remains an open challenge. In this work, we present a model-free motion imitation framework (KINESIS) to advance the understanding of muscle-based motor control. Using a musculoskeletal model of the lower body with 80 muscle actuators and 20 DoF, we demonstrate that KINESIS achieves strong imitation performance on 1.9 hours of motion capture data, is controllable by natural language through pre-trained text-to-motion generative models, and can be fine-tuned to carry out high-level tasks such as target goal reaching. Importantly, KINESIS generates muscle activity patterns that correlate well with human EMG activity. The physiological plausibility makes KINESIS a promising model for tackling challenging problems in human motor control theory, which we highlight by investigating Bernstein's redundancy problem in the context of locomotion. Code, videos and benchmarks will be available at https://github.com/amathislab/Kinesis.
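To make the imitation objective concrete: motion-imitation frameworks of this kind typically reward the policy for tracking a reference motion-capture pose at every timestep, often via an exponentiated negative pose error (the DeepMimic-style formulation). The sketch below is illustrative only; the function name, the 20-dimensional joint vector matching the model's DoF count, and the `sigma` scale are assumptions, not the exact reward used by KINESIS.

```python
import numpy as np

def imitation_reward(qpos, qpos_ref, sigma=0.5):
    """Illustrative tracking reward: exp of the negative squared joint-angle
    error between the simulated pose and the motion-capture reference.
    (Hypothetical sketch; KINESIS's actual reward terms are defined in its code.)"""
    err = np.sum((np.asarray(qpos) - np.asarray(qpos_ref)) ** 2)
    return float(np.exp(-err / (2.0 * sigma ** 2)))

# Perfect tracking of a 20-DoF reference pose gives the maximum reward of 1.0.
perfect = imitation_reward(np.zeros(20), np.zeros(20))

# A pose that deviates from the reference earns a strictly smaller reward.
worse = imitation_reward(np.full(20, 0.3), np.zeros(20))
```

A reward shaped this way is bounded in (0, 1], which keeps the RL optimization well-conditioned as tracking error grows.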
Similar Papers
Massively Parallel Imitation Learning of Mouse Forelimb Musculoskeletal Reaching Dynamics
Machine Learning (CS)
Teaches simulated mice to move like real animals.
Deep Sensorimotor Control by Imitating Predictive Models of Human Motion
Robotics
Robots learn to move by watching humans.
KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills
Robotics
Robots learn to copy fast, tricky human moves.