HY-Motion 1.0: Scaling Flow Matching Models for Text-To-Motion Generation
By: Yuxin Wen, Qing Shuai, Di Kang, and more
Potential Business Impact:
Creates realistic 3D human movements from words.
We present HY-Motion 1.0, a series of state-of-the-art, large-scale motion generation models capable of generating 3D human motions from textual descriptions. HY-Motion 1.0 represents the first successful attempt to scale up Diffusion Transformer (DiT)-based flow matching models to the billion-parameter scale within the motion generation domain, delivering instruction-following capabilities that significantly outperform current open-source baselines. Uniquely, we introduce a comprehensive, full-stage training paradigm -- including large-scale pretraining on over 3,000 hours of motion data, high-quality fine-tuning on 400 hours of curated data, and reinforcement learning from both human feedback and reward models -- to ensure precise alignment with text instructions and high motion quality. This framework is supported by our meticulous data processing pipeline, which performs rigorous motion cleaning and captioning. Consequently, our model achieves the most extensive coverage to date, spanning over 200 motion categories across 6 major classes. We release HY-Motion 1.0 to the open-source community to foster future research and accelerate the transition of 3D human motion generation models toward commercial maturity.
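For readers unfamiliar with flow matching, the standard training objective regresses a velocity field along a straight-line path between a noise sample and a data sample; the model then generates motion by integrating that field from noise. The sketch below illustrates this objective in PyTorch under stated assumptions: the function name, the model signature model(xt, t, text_emb), and the tensor shapes are illustrative, not HY-Motion's actual interface.

    import torch

    def flow_matching_loss(model, x1, text_emb):
        # Conditional flow matching (rectified-flow style) training step.
        # Illustrative sketch only; HY-Motion's exact formulation may differ.
        # x1: clean motion sequence, shape (batch, frames, feature_dim)
        # text_emb: text-conditioning embeddings, one per sample
        x0 = torch.randn_like(x1)                      # Gaussian noise endpoint
        t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
        t_ = t.view(-1, 1, 1)                          # broadcast over frames/features
        xt = (1.0 - t_) * x0 + t_ * x1                 # straight-line interpolation
        target_v = x1 - x0                             # constant target velocity
        pred_v = model(xt, t, text_emb)                # DiT predicts the velocity field
        return torch.mean((pred_v - target_v) ** 2)    # regress predicted onto target velocity

At sampling time, the learned velocity field is integrated (e.g., with a few Euler steps) from pure noise to a motion sequence, conditioned on the text embedding.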
Similar Papers
MultiMotion: Multi Subject Video Motion Transfer via Video Diffusion Transformer
CV and Pattern Recognition
Lets videos copy other videos' movements perfectly.
SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity Prediction
CV and Pattern Recognition
Makes animated characters move like you describe.
EchoMotion: Unified Human Video and Motion Generation via Dual-Modality Diffusion Transformer
CV and Pattern Recognition
Makes videos of people move more realistically.