ViBES: A Conversational Agent with Behaviorally-Intelligent 3D Virtual Body
By: Juze Zhang, Changan Chen, Xin Chen, and more
Potential Business Impact:
Makes virtual people talk and move naturally.
Human communication is inherently multimodal and social: words, prosody, and body language jointly carry intent. Yet most prior systems model human behavior as a translation task (co-speech gesture or text-to-motion) that maps a fixed utterance to motion clips, without requiring agentic decision-making about when to move, what to do, or how to adapt across multi-turn dialogue. This leads to brittle timing, weak social grounding, and fragmented stacks where speech, text, and motion are trained or inferred in isolation. We introduce ViBES (Voice in Behavioral Expression and Synchrony), a conversational 3D agent that jointly plans language and movement and executes dialogue-conditioned body actions. Concretely, ViBES is a speech-language-behavior (SLB) model with a mixture-of-modality-experts (MoME) backbone: modality-partitioned transformer experts for speech, facial expression, and body motion. The model processes interleaved multimodal token streams with hard routing by modality (parameters are split per expert) while sharing information through cross-expert attention. By leveraging strong pretrained speech-language models, the agent supports mixed-initiative interaction: users can speak, type, or issue body-action directives mid-conversation, and the system exposes controllable behavior hooks for streaming responses. We further benchmark multi-turn conversation with automatic metrics of dialogue-motion alignment and behavior quality, and observe consistent gains over strong co-speech and text-to-motion baselines. ViBES goes beyond "speech-conditioned motion generation" toward agentic virtual bodies where language, prosody, and movement are jointly generated, enabling controllable, socially competent 3D interaction. Code and data will be made available at: ai.stanford.edu/~juze/ViBES/
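To make the MoME idea concrete, below is a minimal PyTorch sketch of one such layer, written from the abstract alone and not the authors' implementation. It assumes three per-token modality ids (speech, face, body), one feed-forward expert per modality with hard routing so each token only passes through its own expert's parameters, and a single shared self-attention over the interleaved token stream standing in for cross-expert attention. All names, sizes, and the attention/expert split are illustrative assumptions.

# Minimal sketch of a mixture-of-modality-experts (MoME) layer, assuming a
# hypothetical layout: modality-partitioned feed-forward experts with hard
# routing, plus shared attention over the interleaved multimodal stream.
import torch
import torch.nn as nn

SPEECH, FACE, BODY = 0, 1, 2  # hypothetical modality ids

class MoMELayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Shared attention lets tokens of every modality attend to each other
        # (a stand-in for the cross-expert attention described in the paper).
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Modality-partitioned experts: parameters are split per modality.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(3)
        )

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); modality: (batch, seq) integer ids.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)  # shared cross-modality attention
        x = x + attn_out
        h = self.norm2(x)
        out = torch.zeros_like(h)
        for m, expert in enumerate(self.experts):
            mask = modality == m          # hard routing: each token goes only
            if mask.any():                # through its own modality's expert
                out[mask] = expert(h[mask])
        return x + out

# Toy usage: an interleaved stream of 6 tokens across the three modalities.
layer = MoMELayer()
tokens = torch.randn(1, 6, 512)
ids = torch.tensor([[SPEECH, SPEECH, FACE, BODY, BODY, FACE]])
print(layer(tokens, ids).shape)  # torch.Size([1, 6, 512])

The key design point this sketch tries to capture is the split between shared information flow (attention over the whole interleaved stream) and per-modality capacity (experts whose weights never mix across modalities), which is what distinguishes the MoME backbone from a single monolithic transformer.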
Similar Papers
Versatile Multimodal Controls for Expressive Talking Human Animation
CV and Pattern Recognition
Makes people in videos talk and move like you want.
ViMoNet: A Multimodal Vision-Language Framework for Human Behavior Understanding from Motion and Video
CV and Pattern Recognition
Helps computers understand what people are doing.
ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation
CV and Pattern Recognition
Creates realistic 3D people from one picture.