Multi-identity Human Image Animation with Structural Video Diffusion

Published: April 5, 2025 | arXiv ID: 2504.04126v1

By: Zhenzhi Wang, Yixuan Li, Yanhong Zeng, and more

Potential Business Impact:

Turns a single picture into a video of multiple people interacting.

Business Areas:
Image Recognition Data and Analytics, Software

Generating human videos from a single image while ensuring high visual quality and precise control is a challenging task, especially in complex scenarios involving multiple individuals and interactions with objects. Existing methods, while effective for single-human cases, often fail to handle the intricacies of multi-identity interactions because they struggle to associate the correct pairs of human appearance and pose conditions and to model the distribution of 3D-aware dynamics. To address these limitations, we present Structural Video Diffusion, a novel framework designed for generating realistic multi-human videos. Our approach introduces two core innovations: identity-specific embeddings to maintain consistent appearances across individuals, and a structural learning mechanism that incorporates depth and surface-normal cues to model human-object interactions. Additionally, we expand an existing human video dataset with 25K new videos featuring diverse multi-human and object-interaction scenarios, providing a robust foundation for training. Experimental results demonstrate that Structural Video Diffusion achieves superior performance in generating lifelike, coherent videos of multiple subjects with dynamic and rich interactions, advancing the state of human-centric video generation.
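The paper does not publish implementation details here, but the identity-specific embedding idea can be sketched minimally: give each person a learned embedding vector and add it to that person's pose features, so the generator can tell which pose stream belongs to which appearance. All names, shapes, and values below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 people in the scene, 8-dim conditioning features.
num_ids, feat_dim = 3, 8

# Learned per-identity embeddings (randomly initialized for this sketch).
id_embeddings = rng.normal(size=(num_ids, feat_dim))

# Per-frame pose features, one row per person (placeholder values).
pose_features = rng.normal(size=(num_ids, feat_dim))

def tag_poses_with_identity(pose_feats, id_embs):
    """Add each person's identity embedding to their pose features so a
    downstream generator can associate the right appearance with the
    right pose stream (hypothetical conditioning scheme)."""
    return pose_feats + id_embs

conditioned = tag_poses_with_identity(pose_features, id_embeddings)
print(conditioned.shape)  # (3, 8): one identity-tagged feature per person
```

In a real diffusion model these tagged features would be injected as conditioning (e.g., via cross-attention), and the embeddings would be trained jointly; the sketch only shows the association step.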

Country of Origin
🇭🇰 Hong Kong

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition