HumanCM: One Step Human Motion Prediction
By: Liu Haojie, Gao Suixiang
Potential Business Impact:
Makes computer-animated people move more realistically.
We present HumanCM, a one-step human motion prediction framework built upon consistency models. Instead of relying on multi-step denoising as in diffusion-based methods, HumanCM performs efficient single-step generation by learning a self-consistent mapping between noisy and clean motion states. The framework adopts a Transformer-based spatiotemporal architecture with temporal embeddings to model long-range dependencies and preserve motion coherence. Experiments on Human3.6M and HumanEva-I demonstrate that HumanCM achieves comparable or superior accuracy to state-of-the-art diffusion models while reducing inference steps by up to two orders of magnitude.
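The abstract's core idea, replacing multi-step denoising with a single evaluation of a self-consistent mapping from noisy to clean motion, can be sketched as follows. This is an illustrative assumption-laden sketch in the style of standard consistency models, not the paper's actual implementation: the denoiser `F` (a stand-in for the Transformer backbone), the noise-schedule constants, and the motion dimensions are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of one-step consistency-model sampling for human
# motion prediction. The network F, the schedule constants, and the motion
# tensor shape are illustrative assumptions, not the paper's values.

SIGMA_MIN, SIGMA_MAX, SIGMA_DATA = 0.002, 80.0, 0.5

def c_skip(sigma):
    # Skip coefficient; equals 1 at sigma = SIGMA_MIN.
    return SIGMA_DATA**2 / ((sigma - SIGMA_MIN)**2 + SIGMA_DATA**2)

def c_out(sigma):
    # Output coefficient; equals 0 at sigma = SIGMA_MIN.
    return SIGMA_DATA * (sigma - SIGMA_MIN) / np.sqrt(sigma**2 + SIGMA_DATA**2)

def F(x, sigma):
    # Stand-in for the spatiotemporal Transformer denoiser; here a fixed
    # small linear map so the sketch is self-contained and runnable.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((x.shape[-1], x.shape[-1])) * 0.01
    return x @ W

def consistency_fn(x, sigma):
    # Parameterization that enforces the boundary condition
    # f(x, SIGMA_MIN) = x, so the map is self-consistent at the clean end.
    return c_skip(sigma) * x + c_out(sigma) * F(x, sigma)

def sample_one_step(shape, rng):
    # Single-step generation: map pure noise at SIGMA_MAX directly to a
    # clean motion sequence, with no iterative denoising loop.
    x_T = SIGMA_MAX * rng.standard_normal(shape)
    return consistency_fn(x_T, SIGMA_MAX)

rng = np.random.default_rng(42)
# Assumed shape: 25 future frames x (22 joints * 3 coordinates).
motion = sample_one_step((25, 66), rng)
print(motion.shape)
```

The skip/output coefficients are the standard way to make the consistency condition hold by construction: at the minimum noise level the function returns its input unchanged, while at high noise it relies on the learned denoiser, which is what permits the claimed reduction from hundreds of sampling steps to one.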
Similar Papers
One-shot Humanoid Whole-body Motion Learning
Robotics
Robots learn new moves from just one example.
A Spatio-temporal Continuous Network for Stochastic 3D Human Motion Prediction
CV and Pattern Recognition
Predicts human movements smoothly and with variation.
Human Motion Prediction via Test-domain-aware Adaptation with Easily-available Human Motions Estimated from Videos
CV and Pattern Recognition
Teaches computers to predict human movement better.