UniAct: Unified Motion Generation and Action Streaming for Humanoid Robots
By: Nan Jiang, Zimo He, Wanhe Yu, and more
Potential Business Impact:
Robots can follow many kinds of commands almost instantly.
A long-standing objective in humanoid robotics is the realization of versatile agents capable of following diverse multimodal instructions with human-level flexibility. Despite advances in humanoid control, bridging high-level multimodal perception with whole-body execution remains a significant bottleneck. Existing methods often struggle to translate heterogeneous instructions -- such as language, music, and trajectories -- into stable, real-time actions. Here we show that UniAct, a two-stage framework integrating a fine-tuned multimodal large language model (MLLM) with a causal streaming pipeline, enables humanoid robots to execute multimodal instructions with sub-500 ms latency. By unifying inputs through a shared discrete codebook built with finite scalar quantization (FSQ), UniAct ensures cross-modal alignment while constraining motions to a physically grounded manifold. This approach yields a 19% improvement in the success rate of zero-shot tracking of imperfect reference motions. We validate UniAct on UniMoCap, our 20-hour humanoid motion benchmark, demonstrating robust generalization across diverse real-world scenarios. Our results mark a critical step toward responsive, general-purpose humanoid assistants capable of seamless interaction through unified perception and control.
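The shared discrete codebook is the piece that lets language, music, and trajectory inputs land in one token space. As a rough illustration of how finite scalar quantization discretizes a continuous latent without an explicit learned codebook, the PyTorch sketch below bounds each latent dimension, rounds it to a small set of levels, and derives an integer code. The function name, the level configuration, and the straight-through trick shown here are illustrative assumptions, not UniAct's actual implementation.

```python
import torch

def fsq_quantize(z, levels=(7, 5, 5, 5)):
    """Minimal finite scalar quantization (FSQ) sketch; illustrative, not UniAct's code.

    z: tensor of shape (..., d) with d == len(levels).
    levels: odd number of quantization levels per latent dimension; the implicit
        codebook size is the product of the levels (7 * 5 * 5 * 5 = 875 here).
    Returns the quantized latent (with a straight-through gradient) and the
    integer index of each vector in the implicit codebook.
    """
    lv = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (lv - 1) / 2                      # e.g. 7 levels -> integers in [-3, 3]
    bounded = torch.tanh(z) * half           # squash each dimension into its range
    rounded = torch.round(bounded)           # snap to the nearest integer level
    quantized = bounded + (rounded - bounded).detach()  # straight-through estimator

    # Convert the per-dimension levels into a single discrete token id.
    shifted = (rounded + half).long()        # values in [0, L_i - 1]
    strides = torch.cumprod(
        torch.cat([torch.ones(1, dtype=torch.long, device=z.device), lv.long()[:-1]]),
        dim=0,
    )
    codes = (shifted * strides).sum(dim=-1)
    return quantized, codes

# Example: two 4-dimensional latents mapped to tokens from an 875-entry implicit codebook.
z = torch.randn(2, 4)
q, idx = fsq_quantize(z)
```

In the abstract's framing, routing every modality's encoder output through one such quantizer is what keeps heterogeneous instructions aligned in a single discrete vocabulary while restricting decoded motions to the codebook's manifold.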
Similar Papers
UniTracker: Learning Universal Whole-Body Motion Tracker for Humanoid Robots
Robotics
Robots can copy human movements better.
Uni-Inter: Unifying 3D Human Motion Synthesis Across Diverse Interaction Contexts
CV and Pattern Recognition
Makes computer characters move realistically together.
MM-ACT: Learn from Multimodal Parallel Generation to Act
CV and Pattern Recognition
Robots learn to do tasks by seeing, reading, and acting.