Towards Consistent Long-Term Pose Generation

Published: July 24, 2025 | arXiv ID: 2507.18382v1

By: Yayuan Li, Filippos Bellos, Jason Corso

Potential Business Impact:

Makes computer-generated animations move smoothly and realistically over long sequences.

Current approaches to pose generation rely heavily on intermediate representations, either through two-stage pipelines with quantization or through autoregressive models that accumulate errors during inference. This fundamental limitation leads to degraded performance, particularly in long-term pose generation, where maintaining temporal coherence is crucial. We propose a novel one-stage architecture that directly generates poses in continuous coordinate space from minimal context (a single RGB image and a text description) while maintaining consistent distributions between training and inference. Our key innovation is to eliminate intermediate representations and token-based generation entirely by operating directly on pose coordinates: a relative movement prediction mechanism preserves spatial relationships, and a unified placeholder token approach enables single-forward generation with identical behavior during training and inference. Through extensive experiments on the Penn Action and First-Person Hand Action Benchmark (F-PHAB) datasets, we demonstrate that our approach significantly outperforms existing quantization-based and autoregressive methods, especially in long-term generation scenarios.
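The abstract's two core ideas (predicting relative movements rather than absolute coordinates, and feeding learned placeholder tokens so one forward pass yields all future poses) can be sketched in a few lines of PyTorch. The sketch below is a hedged reading of the abstract, not the authors' implementation; every name here (`PosePlaceholderGenerator`, `delta_head`, the joint count, the fused image-text context embedding) is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class PosePlaceholderGenerator(nn.Module):
    """Hypothetical sketch of one-stage pose generation with placeholder
    tokens and relative-movement prediction, as described in the abstract.
    Shapes and module choices are assumptions, not the paper's code."""

    def __init__(self, num_joints=13, d_model=256, num_frames=32):
        super().__init__()
        self.num_joints = num_joints
        self.num_frames = num_frames
        # Learned placeholder tokens, one per future frame; they stand in
        # for the unknown poses, so training and inference see the same input.
        self.placeholders = nn.Parameter(torch.randn(num_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        # Head predicts a relative movement (delta) per joint per frame,
        # rather than absolute coordinates.
        self.delta_head = nn.Linear(d_model, num_joints * 2)

    def forward(self, context, init_pose):
        # context: (B, C, d_model) fused image+text embeddings (assumed given)
        # init_pose: (B, num_joints, 2) pose detected in the input RGB image
        B = context.size(0)
        tokens = self.placeholders.unsqueeze(0).expand(B, -1, -1)
        h = self.backbone(torch.cat([context, tokens], dim=1))
        deltas = self.delta_head(h[:, -self.num_frames:])       # (B, T, J*2)
        deltas = deltas.view(B, self.num_frames, self.num_joints, 2)
        # Cumulative relative movements anchor every frame to the initial
        # pose, preserving spatial relationships over long horizons.
        return init_pose.unsqueeze(1) + deltas.cumsum(dim=1)    # (B, T, J, 2)

# Toy usage: batch of 2, with 4 fused context tokens.
model = PosePlaceholderGenerator()
ctx = torch.randn(2, 4, 256)
pose0 = torch.randn(2, 13, 2)
future = model(ctx, pose0)  # (2, 32, 13, 2) future poses in one forward pass
```

Because every frame is the initial pose plus a cumulative sum of predicted deltas, the model never conditions on its own (possibly erroneous) earlier outputs, which is the abstract's argument for why the training and inference distributions stay identical.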

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition