Multi-identity Human Image Animation with Structural Video Diffusion
By: Zhenzhi Wang, Yixuan Li, Yanhong Zeng, and more
Potential Business Impact:
Turns one picture into a video of many people interacting.
Generating human videos from a single image while ensuring high visual quality and precise control is challenging, especially in complex scenarios involving multiple individuals and interactions with objects. Existing methods, while effective for single-human cases, often fail to handle the intricacies of multi-identity interactions: they struggle to associate the correct pairings of human appearance and pose conditions and to model the distribution of 3D-aware dynamics. To address these limitations, we present Structural Video Diffusion, a novel framework for generating realistic multi-human videos. Our approach introduces two core innovations: identity-specific embeddings that maintain consistent appearances across individuals, and a structural learning mechanism that incorporates depth and surface-normal cues to model human-object interactions. Additionally, we expand existing human video datasets with 25K new videos featuring diverse multi-human and object-interaction scenarios, providing a robust foundation for training. Experimental results demonstrate that Structural Video Diffusion achieves superior performance in generating lifelike, coherent videos of multiple subjects with dynamic, rich interactions, advancing the state of human-centric video generation.
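To make the two innovations concrete, below is a minimal PyTorch sketch, not the authors' code, of how identity-specific conditioning and structural cues could be wired up: a learned per-person embedding is added to that person's pose-feature track so the backbone can bind the right appearance to the right pose, and depth plus surface-normal maps are stacked into a structural conditioning signal. All module names, shapes, and parameters here are hypothetical.

```python
# Illustrative sketch only; all names and shapes are assumptions,
# not the Structural Video Diffusion implementation.
import torch
import torch.nn as nn

class IdentityConditioner(nn.Module):
    """Adds a learned per-identity embedding to each person's pose
    features, so appearance and pose condition stay correctly paired."""
    def __init__(self, max_identities: int = 8, dim: int = 256):
        super().__init__()
        self.id_embed = nn.Embedding(max_identities, dim)  # one vector per person
        self.pose_proj = nn.Linear(dim, dim)               # project pose features

    def forward(self, pose_feats: torch.Tensor, ids: torch.Tensor) -> torch.Tensor:
        # pose_feats: (batch, people, frames, dim) per-person pose features
        # ids:        (batch, people) integer identity indices
        id_vec = self.id_embed(ids)                        # (B, P, dim)
        return self.pose_proj(pose_feats) + id_vec[:, :, None, :]

def structural_condition(depth: torch.Tensor, normals: torch.Tensor) -> torch.Tensor:
    # depth:   (B, frames, 1, H, W) monocular depth maps
    # normals: (B, frames, 3, H, W) surface-normal maps
    # Concatenate along the channel axis into a 4-channel structural cue
    # of the kind the abstract describes for modeling human-object contact.
    return torch.cat([depth, normals], dim=2)

# Usage: tag two people's pose tracks with distinct identity codes.
cond = IdentityConditioner()
pose = torch.randn(1, 2, 16, 256)   # 2 people, 16 frames
ids = torch.tensor([[0, 1]])
out = cond(pose, ids)               # (1, 2, 16, 256)
```

Keeping the identity embedding separate from the pose projection means each person's appearance code stays fixed across frames even as their pose stream changes, which is one plausible way to realize the consistent multi-identity appearances the abstract claims.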
Similar Papers
Animating the Uncaptured: Humanoid Mesh Animation with Video Diffusion Models
Graphics
Makes 3D characters move like real people.
SVAD: From Single Image to 3D Avatar via Synthetic Data Generation with Video Diffusion and Data Augmentation
CV and Pattern Recognition
Makes 3D avatars from one picture.
MVP4D: Multi-View Portrait Video Diffusion for Animatable 4D Avatars
CV and Pattern Recognition
Makes digital people move realistically from one photo.