Towards Consistent Long-Term Pose Generation
By: Yayuan Li, Filippos Bellos, Jason Corso
Potential Business Impact:
Makes computer animations move smoothly and realistically.
Current approaches to pose generation rely heavily on intermediate representations, either through two-stage pipelines with quantization or through autoregressive models that accumulate errors during inference. This fundamental limitation leads to degraded performance, particularly in long-term pose generation, where maintaining temporal coherence is crucial. We propose a novel one-stage architecture that directly generates poses in continuous coordinate space from minimal context (a single RGB image and a text description) while maintaining consistent distributions between training and inference. Our key innovation is to eliminate intermediate representations and token-based generation entirely: a relative movement prediction mechanism operates directly on pose coordinates and preserves spatial relationships, while a unified placeholder token approach enables single-forward generation with identical behavior during training and inference. Through extensive experiments on the Penn Action and First-Person Hand Action Benchmark (F-PHAB) datasets, we demonstrate that our approach significantly outperforms existing quantization-based and autoregressive methods, especially in long-term generation scenarios.
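To make the two mechanisms concrete, here is a minimal PyTorch sketch of how relative movement prediction and learned placeholder tokens could combine for single-forward pose generation. This is an illustration under assumptions, not the authors' implementation: the class name, the `delta_head` regressor, the feature dimensions, and the fused image+text context are all hypothetical.

```python
# Minimal sketch of the one-stage idea described in the abstract.
# Assumes a transformer backbone and a precomputed (B, L, ctx_dim)
# tensor of fused image+text features; all names are hypothetical.
import torch
import torch.nn as nn

class PosePlaceholderGenerator(nn.Module):
    """Generates a horizon of future poses in one forward pass.

    Context features are concatenated with learned placeholder tokens,
    one per future frame; the model regresses relative movements
    (deltas) from the initial pose rather than absolute coordinates.
    """
    def __init__(self, ctx_dim=512, num_joints=13, horizon=16):
        super().__init__()
        self.horizon = horizon
        self.num_joints = num_joints
        # One learned placeholder token per future frame. The same
        # tokens are used at train and test time, so there is no
        # train/inference distribution shift by construction.
        self.placeholders = nn.Parameter(torch.zeros(horizon, ctx_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=ctx_dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Regress a 2D relative movement per joint, per frame.
        self.delta_head = nn.Linear(ctx_dim, num_joints * 2)

    def forward(self, context, init_pose):
        # context:   (B, L, ctx_dim) fused image+text features
        # init_pose: (B, num_joints, 2) pose observed in the RGB frame
        B = context.size(0)
        tokens = self.placeholders.unsqueeze(0).expand(B, -1, -1)
        h = self.backbone(torch.cat([context, tokens], dim=1))
        h = h[:, -self.horizon:]  # outputs at the placeholder positions
        deltas = self.delta_head(h).view(
            B, self.horizon, self.num_joints, 2)
        # Relative prediction: each frame is the initial pose plus the
        # accumulated offsets, keeping generation in continuous
        # coordinate space with no quantization step.
        return init_pose.unsqueeze(1) + deltas.cumsum(dim=1)
```

Because all frames are predicted in a single forward pass rather than autoregressively, errors cannot compound frame by frame, which is the property the abstract credits for stronger long-term coherence.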
Similar Papers
Making Pose Representations More Expressive and Disentangled via Residual Vector Quantization
Computer Vision and Pattern Recognition
Makes computer-made people move more realistically.
A Coarse-to-Fine Human Pose Estimation Method based on Two-stage Distillation and Progressive Graph Neural Network
Computer Vision and Pattern Recognition
Makes computers see people's body poses faster.
TalkingPose: Efficient Face and Gesture Animation with Feedback-guided Diffusion Model
Computer Vision and Pattern Recognition
Creates long, smooth talking animations from pictures.