Spatial-Temporal Multi-Scale Quantization for Flexible Motion Generation
By: Zan Wang, Jingze Zhang, Yixin Chen, and more
Potential Business Impact:
Makes computer-generated movements look more realistic.
Despite significant advancements in human motion generation, current motion representations, typically formulated as discrete frame sequences, still face two critical limitations: (i) they fail to capture motion from a multi-scale perspective, limiting their ability to model complex patterns; (ii) they lack compositional flexibility, which is crucial for a model's generalization across diverse generation tasks. To address these challenges, we introduce MSQ, a novel quantization method that compresses a motion sequence into multi-scale discrete tokens across the spatial and temporal dimensions. MSQ employs distinct encoders to capture body parts at varying spatial granularities and temporally interpolates the encoded features into multiple scales before quantizing them into discrete tokens. Building on this representation, we establish a generative masked modeling framework that effectively supports motion editing, motion control, and conditional motion generation. Through quantitative and qualitative analysis, we show that our quantization method enables seamless composition of motion tokens without requiring specialized design or re-training. Furthermore, extensive evaluations demonstrate that our approach outperforms existing baseline methods on various benchmarks.
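The abstract outlines the MSQ pipeline only at a high level. The sketch below is a minimal, illustrative reading of it in PyTorch, not the authors' implementation: one encoder per body-part group, temporal interpolation of the encoded features to several scales, and nearest-neighbour codebook quantization. The body-part split, feature dimensions, scale factors, and all module names are assumptions.

```python
# Minimal sketch (assumed design, not the authors' code) of spatial-temporal
# multi-scale quantization: per-body-part encoders, temporal interpolation of
# features to several scales, and a shared vector-quantization codebook.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup (losses / straight-through estimator omitted)."""

    def __init__(self, num_codes=512, dim=128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        # z: (batch, time, dim) -> discrete token ids and their quantized embeddings
        dist = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)  # (B, T, num_codes)
        ids = dist.argmin(dim=-1)                                       # (B, T)
        return ids, self.codebook(ids)


class MSQSketch(nn.Module):
    """Encode body-part groups separately, then quantize at multiple temporal scales."""

    def __init__(self, part_dims=(126, 137), dim=128, scales=(1, 2, 4)):
        super().__init__()
        # One lightweight 1D-conv encoder per spatial group (e.g. lower/upper body); split is assumed.
        self.encoders = nn.ModuleList(
            nn.Conv1d(d, dim, kernel_size=3, padding=1) for d in part_dims
        )
        self.part_dims = list(part_dims)
        self.scales = scales                       # temporal down-sampling factors (assumed)
        self.quantizer = VectorQuantizer(dim=dim)  # shared codebook across parts and scales

    def forward(self, motion):
        # motion: (batch, time, sum(part_dims)) raw per-frame pose features
        parts = torch.split(motion, self.part_dims, dim=-1)
        tokens = {}
        for p_idx, (part, enc) in enumerate(zip(parts, self.encoders)):
            feat = enc(part.transpose(1, 2))                            # (B, dim, T)
            for s in self.scales:
                # Interpolate features to a coarser temporal scale, then quantize.
                coarse = F.interpolate(feat, scale_factor=1.0 / s, mode="linear")
                ids, _ = self.quantizer(coarse.transpose(1, 2))
                tokens[(p_idx, s)] = ids           # token grid per (body part, temporal scale)
        return tokens


if __name__ == "__main__":
    model = MSQSketch()
    motion = torch.randn(2, 64, 263)  # e.g. a HumanML3D-style 263-dim pose sequence
    for key, ids in model(motion).items():
        print(key, ids.shape)
```

In this reading, sharing a single codebook across body parts and temporal scales is what would let tokens from different granularities be recombined freely, which is the compositional property the abstract emphasizes for editing, control, and conditional generation.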
Similar Papers
MoSa: Motion Generation with Scalable Autoregressive Modeling
CV and Pattern Recognition
Makes computer-generated characters move more realistically.
Making Pose Representations More Expressive and Disentangled via Residual Vector Quantization
CV and Pattern Recognition
Makes computer-generated characters move more realistically.
Generative Action Tell-Tales: Assessing Human Motion in Synthesized Videos
CV and Pattern Recognition
Checks whether human motion in synthesized videos looks realistic.