PersonaLive: Expressive Portrait Image Animation for Live Streaming
By: Zhiyuan Li, Chi-Man Pun, Chen Fang, and more
Current diffusion-based portrait animation models predominantly focus on enhancing visual quality and expression realism while overlooking generation latency and real-time performance, which limits their applicability in live-streaming scenarios. We propose PersonaLive, a novel diffusion-based framework for streaming, real-time portrait animation trained with a multi-stage recipe. Specifically, we first adopt hybrid implicit signals, namely implicit facial representations and 3D implicit keypoints, to achieve expressive image-level motion control. Then, a few-step appearance distillation strategy is proposed to eliminate appearance redundancy in the denoising process, greatly improving inference efficiency. Finally, we introduce an autoregressive micro-chunk streaming generation paradigm equipped with a sliding training strategy and a historical keyframe mechanism to enable low-latency and stable long-term video generation. Extensive experiments demonstrate that PersonaLive achieves state-of-the-art performance with up to 7-22x speedup over prior diffusion-based portrait animation models.
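To make the micro-chunk streaming idea concrete, the sketch below shows one plausible reading of the generation loop: motion signals are buffered into small chunks, each chunk is denoised in a few steps conditioned on the reference image and a short history of keyframes, and frames are emitted as soon as a chunk is ready. All names, chunk sizes, and the keyframe count here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

CHUNK_LEN = 4        # frames generated per micro-chunk (assumed)
NUM_KEYFRAMES = 2    # historical keyframes kept as conditioning (assumed)

def denoise_chunk(ref_image, motion_signals, history):
    """Placeholder for the few-step diffusion denoiser.

    Returns one frame per motion signal, shaped like the reference image.
    A real model would condition on ref_image, the per-frame motion
    signals, and the historical keyframes in `history`.
    """
    return [np.random.rand(*ref_image.shape).astype(ref_image.dtype)
            for _ in motion_signals]

def stream_animation(ref_image, motion_stream):
    """Yield frames chunk by chunk, autoregressively, with low latency."""
    history = []   # historical keyframes anchoring appearance over time
    chunk = []     # buffered motion signals for the current micro-chunk
    for motion in motion_stream:   # e.g. implicit facial representation + 3D keypoints per frame
        chunk.append(motion)
        if len(chunk) < CHUNK_LEN:
            continue
        frames = denoise_chunk(ref_image, chunk, history)
        yield from frames          # emit frames immediately, before the next chunk starts
        # keep only the most recent keyframes as conditioning for future chunks
        history = (history + [frames[-1]])[-NUM_KEYFRAMES:]
        chunk = []

if __name__ == "__main__":
    ref = np.zeros((64, 64, 3), dtype=np.float32)
    motions = (np.random.rand(10) for _ in range(12))   # fake per-frame motion signals
    for i, frame in enumerate(stream_animation(ref, motions)):
        print(f"frame {i}: {frame.shape}")
```

The point of the chunked loop is that latency is bounded by the time to denoise one small chunk rather than a full clip, while the keyframe history is what keeps long-term generation stable across chunk boundaries.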
Similar Papers
FactorPortrait: Controllable Portrait Animation via Disentangled Expression, Pose, and Viewpoint
CV and Pattern Recognition
Makes still pictures move like real people.
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
CV and Pattern Recognition
Makes talking avatars move instantly.