MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation without Vector Quantization
By: Binjie Liu, Lina Liu, Sanyi Zhang, and more
Potential Business Impact:
Makes computer characters move hands naturally while talking.
This work focuses on full-body co-speech gesture generation. Existing methods typically employ an autoregressive model over vector-quantized tokens for gesture generation, which causes information loss and compromises the realism of the generated gestures. To address this, inspired by the natural continuity of real-world human motion, we propose MAG, a novel multi-modal aligned framework for high-quality and diverse co-speech gesture synthesis without relying on discrete tokenization. Specifically, (1) we introduce a motion-text-audio-aligned variational autoencoder (MTA-VAE), which leverages pre-trained WavCaps text and audio embeddings to enhance both semantic and rhythmic alignment with motion, ultimately producing more realistic gestures. (2) Building on this, we propose a multimodal masked autoregressive model (MMAG) that enables autoregressive modeling over continuous motion embeddings through diffusion, without vector quantization. To further ensure multi-modal consistency, MMAG incorporates a hybrid-granularity audio-text fusion block, which serves as the conditioning signal for the diffusion process. Extensive experiments on two benchmark datasets demonstrate that MAG achieves state-of-the-art performance both quantitatively and qualitatively, producing highly realistic and diverse co-speech gestures. The code will be released to facilitate future research.
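To make the core idea concrete, below is a minimal sketch of masked autoregressive generation over continuous embeddings with a diffusion-style sampling head, the general technique the abstract describes. This is not the authors' implementation: the learned denoiser and the conditioning transformer are replaced by toy NumPy stand-ins (a shrink-toward-condition update and a mean over generated tokens), and all names (`diffusion_head_sample`, `masked_ar_generate`) are hypothetical.

```python
import numpy as np

def diffusion_head_sample(cond, dim, steps=20, rng=None):
    """Toy diffusion head: start from Gaussian noise and iteratively
    denoise a continuous token embedding toward the conditioning vector.
    (The real model would use a learned noise-prediction network.)"""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(dim)
    for t in range(steps, 0, -1):
        alpha = t / steps  # remaining noise level, shrinks each step
        x = (1 - 1 / t) * x + (1 / t) * cond \
            + np.sqrt(alpha) * 0.01 * rng.standard_normal(dim)
    return x

def masked_ar_generate(seq_len=8, dim=4, per_round=2, rng=None):
    """Masked autoregressive generation over continuous embeddings:
    begin fully masked; each round, sample a few masked positions with
    the diffusion head, conditioned on the tokens generated so far
    (here just their mean, standing in for a transformer context)."""
    rng = rng or np.random.default_rng(0)
    tokens = np.zeros((seq_len, dim))
    known = np.zeros(seq_len, dtype=bool)
    while not known.all():
        cond = tokens[known].mean(axis=0) if known.any() else np.zeros(dim)
        for i in np.flatnonzero(~known)[:per_round]:
            tokens[i] = diffusion_head_sample(cond, dim, rng=rng)
            known[i] = True
    return tokens

gestures = masked_ar_generate()
print(gestures.shape)  # one continuous embedding per sequence position
```

The key contrast with VQ-based pipelines is that each position receives a continuous vector sampled by the diffusion head, rather than an index into a finite codebook, so no quantization loss is incurred.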
Similar Papers
M3G: Multi-Granular Gesture Generator for Audio-Driven Full-Body Human Motion Synthesis
Graphics
Makes avatars move realistically from sound.
MoSa: Motion Generation with Scalable Autoregressive Modeling
CV and Pattern Recognition
Makes computer-made people move more realistically.
Taming Teacher Forcing for Masked Autoregressive Video Generation
CV and Pattern Recognition
Makes computers create long, clear videos from few pictures.