OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions
By: Wendong Bu, Kaihang Pan, Yuze Lin, and more
Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion, motion editing, and AnyContext, exhibiting emergent capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward the next generation of intelligent motion generation. Project Page: https://OmniMoGen.github.io/.
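The abstract names an RVQ-VAE motion tokenizer but gives no implementation details. Below is a minimal PyTorch sketch of residual vector quantization, the technique that name conventionally denotes: each stage quantizes the residual left over by the previous stage, so the sum of the selected codes approximates the input. The function `rvq_encode`, the codebook shapes, and all variable names are illustrative assumptions, not the paper's code.

```python
import torch

def rvq_encode(x, codebooks):
    """Residual vector quantization (illustrative sketch, not OmniMoGen's code).

    x:         (batch, dim) per-frame motion features.
    codebooks: list of (codebook_size, dim) tensors, one per RVQ stage.
    Returns the per-stage code indices and the reconstructed features.
    """
    residual = x
    quantized = torch.zeros_like(x)
    indices = []
    for codebook in codebooks:
        # Nearest-neighbor lookup against the current residual.
        dists = torch.cdist(residual, codebook)   # (batch, codebook_size)
        idx = dists.argmin(dim=-1)                # (batch,)
        code = codebook[idx]                      # (batch, dim)
        indices.append(idx)
        quantized = quantized + code              # accumulate the approximation
        residual = residual - code                # pass the leftover to the next stage
    return indices, quantized

# Hypothetical usage: 4 stages, 512 codes each, 256-dim pose features.
codebooks = [torch.randn(512, 256) for _ in range(4)]
x = torch.randn(8, 256)
idx, x_hat = rvq_encode(x, codebooks)
```

The per-stage indices are what a transformer would consume as motion tokens in an interleaved text-motion sequence; deeper stages refine the reconstruction at diminishing cost, which is why RVQ tokenizers are a common choice for motion generation.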
Similar Papers
OmniMotion-X: Versatile Multimodal Whole-Body Motion Generation
CV and Pattern Recognition
Generates realistic whole-body character motion from text or music.
X-MoGen: Unified Motion Generation across Humans and Animals
CV and Pattern Recognition
Generates both human and animal motion from text descriptions.
IRG-MotionLLM: Interleaving Motion Generation, Assessment and Refinement for Text-to-Motion Generation
CV and Pattern Recognition
Improves the realism of generated motion by interleaving generation, assessment, and refinement.