UniAct: Unified Motion Generation and Action Streaming for Humanoid Robots

Published: December 30, 2025 | arXiv ID: 2512.24321v1

By: Nan Jiang, Zimo He, Wanhe Yu, and more

Potential Business Impact:

Humanoid robots can follow spoken, musical, and trajectory commands in near real time.

Business Areas:
Motion Capture, Media and Entertainment, Video

A long-standing objective in humanoid robotics is the realization of versatile agents capable of following diverse multimodal instructions with human-level flexibility. Despite advances in humanoid control, bridging high-level multimodal perception with whole-body execution remains a significant bottleneck. Existing methods often struggle to translate heterogeneous instructions, such as language, music, and trajectories, into stable, real-time actions. Here we show that UniAct, a two-stage framework integrating a fine-tuned MLLM with a causal streaming pipeline, enables humanoid robots to execute multimodal instructions with sub-500 ms latency. By unifying inputs through a shared discrete codebook via finite scalar quantization (FSQ), UniAct ensures cross-modal alignment while constraining motions to a physically grounded manifold. This approach yields a 19% improvement in the success rate of zero-shot tracking of imperfect reference motions. We validate UniAct on UniMoCap, our 20-hour humanoid motion benchmark, demonstrating robust generalization across diverse real-world scenarios. Our results mark a critical step toward responsive, general-purpose humanoid assistants capable of seamless interaction through unified perception and control.
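The abstract does not spell out how the shared discrete codebook is built, but finite scalar quantization is a standard technique: each latent dimension is bounded, rounded to a small fixed set of levels, and the Cartesian product of per-dimension levels forms an implicit codebook. The sketch below is a minimal NumPy illustration of that general idea, not UniAct's implementation; the level choices, the `fsq_quantize` function, and the toy shapes are assumptions for illustration, and the straight-through gradient trick used during training is omitted.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Minimal FSQ sketch (illustrative, not the paper's code):
    round each latent dimension to one of `levels[d]` evenly spaced
    values in [-1, 1] and map the result to a single integer code
    index, i.e. an entry of the implicit codebook.

    z      : (..., D) array of latent features
    levels : list of ints, quantization levels per dimension
    """
    levels = np.asarray(levels)
    # Bound each dimension to [-1, 1] before rounding.
    z_bounded = np.tanh(z)
    # Scale to [0, L-1], round to the nearest level, scale back.
    half = (levels - 1) / 2.0
    idx = np.round((z_bounded + 1.0) * half)   # per-dimension level index
    z_q = idx / half - 1.0                     # quantized latent in [-1, 1]
    # Combine per-dimension indices into one flat code (mixed radix).
    strides = np.concatenate(([1], np.cumprod(levels[:-1])))
    code = (idx * strides).sum(axis=-1).astype(int)
    return z_q, code

# Toy usage: a 3-D latent with levels (8, 8, 5) gives 320 implicit codes.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 3))
z_q, codes = fsq_quantize(z, levels=[8, 8, 5])
print(codes)  # integer codebook indices that different modalities could share
```

Under this reading, "unifying inputs through a shared discrete codebook" would mean that encoders for language, music, and trajectories all emit latents quantized by the same FSQ scheme, so downstream motion generation consumes one common token space regardless of the input modality.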

Country of Origin
🇨🇳 China

Page Count
21 pages

Category
Computer Science:
Computer Vision and Pattern Recognition