DeX-Portrait: Disentangled and Expressive Portrait Animation via Explicit and Latent Motion Representations
By: Yuxiang Shi, Zhe Li, Yanwen Wang, and others
Potential Business Impact:
Makes faces move realistically with different emotions.
Portrait animation from a single source image and a driving video is a long-standing problem. Recent approaches tend to adopt diffusion-based image/video generation models for realistic and expressive animation. However, none of these diffusion models achieves high-fidelity disentangled control between head pose and facial expression, hindering applications like expression-only or pose-only editing and animation. To address this, we propose DeX-Portrait, a novel approach capable of generating expressive portrait animation driven by disentangled pose and expression signals. Specifically, we represent the pose as an explicit global transformation and the expression as an implicit latent code. First, we design a powerful motion trainer that learns both pose and expression encoders for extracting precise and decomposed driving signals. Then we inject the pose transformation into the diffusion model through a dual-branch conditioning mechanism, and the expression latent through cross-attention. Finally, we design a progressive hybrid classifier-free guidance scheme for more faithful identity consistency. Experiments show that our method outperforms state-of-the-art baselines in both animation quality and disentangled controllability.
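To make the expression-injection idea concrete, here is a minimal PyTorch sketch of how an implicit expression latent could be fed into a diffusion U-Net block via cross-attention, with the pose handled separately as an explicit conditioning signal. All names (`ExpressionCrossAttention`, the dimensions, the residual placement) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ExpressionCrossAttention(nn.Module):
    """Hypothetical sketch: inject a driving-frame expression latent into
    flattened U-Net feature tokens via cross-attention (queries come from
    the image features, keys/values from the expression code)."""

    def __init__(self, feat_dim: int, expr_dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim)
        self.attn = nn.MultiheadAttention(
            feat_dim, num_heads,
            kdim=expr_dim, vdim=expr_dim,
            batch_first=True,
        )

    def forward(self, feats: torch.Tensor, expr_latent: torch.Tensor):
        # feats:       (B, N, feat_dim) flattened spatial tokens
        # expr_latent: (B, T, expr_dim) expression codes for the frame
        out, _ = self.attn(self.norm(feats), expr_latent, expr_latent)
        return feats + out  # residual injection keeps identity features

# Usage: one block at an assumed U-Net resolution.
block = ExpressionCrossAttention(feat_dim=320, expr_dim=128)
feats = torch.randn(2, 64, 320)        # batch of 2, 8x8 feature maps
expr = torch.randn(2, 1, 128)          # one expression token per frame
out = block(feats, expr)
print(out.shape)  # torch.Size([2, 64, 320])
```

Keeping the pose as an explicit global transformation in a separate conditioning branch (rather than another latent) is what allows pose-only or expression-only editing: either signal can be swapped or frozen independently at inference time.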
Similar Papers
FactorPortrait: Controllable Portrait Animation via Disentangled Expression, Pose, and Viewpoint
CV and Pattern Recognition
Makes still pictures move like real people.
X-UniMotion: Animating Human Images with Expressive, Unified and Identity-Agnostic Motion Latents
CV and Pattern Recognition
Makes one person's movements copy another's.
Stable Video-Driven Portraits
CV and Pattern Recognition
Makes still pictures talk and move like real people.