UniLS: End-to-End Audio-Driven Avatars for Unified Listening and Speaking
By: Xuangeng Chu, Ruicong Liu, Yifei Huang, and others
Potential Business Impact:
Makes talking avatars look like they're really listening.
Generating lifelike conversational avatars requires modeling not just isolated speakers, but the dynamic, reciprocal interplay of speaking and listening. Modeling the listener, however, is exceptionally challenging: direct audio-driven training fails, producing stiff, static listening motions. This failure stems from a fundamental imbalance: the speaker's motion is strongly driven by speech audio, while the listener's motion primarily follows an internal motion prior and is only loosely guided by external speech. This challenge has led most methods to focus on speak-only generation. The only prior attempt at joint generation relies on the speaker's motion as an additional input to produce the listener, a design that is not end-to-end and therefore hinders real-time applicability. To address this limitation, we present UniLS, the first end-to-end framework for generating unified speak-listen expressions driven only by dual-track audio. Our method introduces a novel two-stage training paradigm. Stage 1 learns the internal motion prior by training an audio-free autoregressive generator, capturing the spontaneous dynamics of natural facial motion. Stage 2 then introduces the dual-track audio, fine-tuning the generator to modulate the learned motion prior based on external speech cues. Extensive evaluations show UniLS achieves state-of-the-art speaking accuracy. More importantly, it delivers up to a 44.1% improvement in listening metrics, generating significantly more diverse and natural listening expressions. This effectively mitigates the stiffness problem and provides a practical, high-fidelity audio-driven solution for interactive digital humans.
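The two-stage paradigm described in the abstract reads naturally as audio-free pretraining followed by audio-conditioned fine-tuning. The minimal PyTorch sketch below illustrates that reading only; the MotionGenerator class, the GRU backbone, the feature dimensions, and the next-frame MSE objective are illustrative assumptions, not the paper's actual architecture or losses.

# Hypothetical sketch of a two-stage train-then-fine-tune scheme in the
# spirit of UniLS. All names, shapes, and losses are assumptions for
# illustration, not the authors' implementation.
import torch
import torch.nn as nn

class MotionGenerator(nn.Module):
    """Autoregressive generator over facial-motion frames.

    Stage 1 trains it with the audio input zeroed, so it must learn an
    internal motion prior; Stage 2 feeds dual-track audio features so the
    same network learns to modulate that prior with external speech cues.
    """
    def __init__(self, motion_dim=64, audio_dim=128, hidden=256):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, hidden)
        # Dual-track audio: the avatar's own speech plus the partner's.
        self.audio_proj = nn.Linear(2 * audio_dim, hidden)
        self.backbone = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, motion_dim)

    def forward(self, motion, audio):
        # motion: (B, T, motion_dim); audio: (B, T, 2*audio_dim)
        x = self.motion_proj(motion) + self.audio_proj(audio)
        h, _ = self.backbone(x)
        return self.head(h)  # predicted next-frame motion

def train_stage(model, loader, use_audio, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for motion, audio in loader:
        if not use_audio:
            # Stage 1: audio-free, forcing the model onto its motion prior.
            audio = torch.zeros_like(audio)
        pred = model(motion[:, :-1], audio[:, :-1])
        loss = loss_fn(pred, motion[:, 1:])  # next-frame prediction
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 1: learn the internal motion prior without audio.
#   train_stage(model, loader, use_audio=False)
# Stage 2: fine-tune with dual-track audio to modulate that prior.
#   train_stage(model, loader, use_audio=True)

The key design point the sketch captures is that both stages share one generator: the second stage does not replace the prior learned in the first, it only learns to steer it with speech cues, which is what the paper credits for avoiding stiff listening motion.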
Similar Papers
Audio-Driven Universal Gaussian Head Avatars
CV and Pattern Recognition
Makes talking avatars look and sound real.
End-to-end Listen, Look, Speak and Act
Artificial Intelligence
Lets computers talk, see, hear, and act together.
UniVoice: Unifying Autoregressive ASR and Flow-Matching based TTS with Large Language Models
Audio and Speech Processing
Lets computers understand and speak like people.