PolyVivid: Vivid Multi-Subject Video Generation with Cross-Modal Interaction and Enhancement

Published: June 9, 2025 | arXiv ID: 2506.07848v1

By: Teng Hu, Zhentao Yu, Zhengguang Zhou, and more

Potential Business Impact:

Generates videos featuring multiple user-specified subjects whose appearance and identity stay consistent across frames, enabling customized, identity-preserving video content.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

Despite recent advances in video generation, existing models still lack fine-grained controllability, especially for multi-subject customization with consistent identity and interaction. In this paper, we propose PolyVivid, a multi-subject video customization framework that enables flexible and identity-consistent generation. To establish accurate correspondences between subject images and textual entities, we design a VLLM-based text-image fusion module that embeds visual identities into the textual space for precise grounding. To further enhance identity preservation and subject interaction, we propose a 3D-RoPE-based enhancement module that enables structured bidirectional fusion between text and image embeddings. Moreover, we develop an attention-inherited identity injection module to effectively inject fused identity features into the video generation process, mitigating identity drift. Finally, we construct an MLLM-based data pipeline that combines MLLM-based grounding, segmentation, and a clique-based subject consolidation strategy to produce high-quality multi-subject data, effectively enhancing subject distinction and reducing ambiguity in downstream video generation. Extensive experiments demonstrate that PolyVivid achieves superior performance in identity fidelity, video realism, and subject alignment, outperforming existing open-source and commercial baselines.
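
The abstract does not include implementation details, but a 3D-RoPE-based module presumably rotates token channels according to temporal and spatial positions so that attention between text and image embeddings carries structured positional cues. Below is a minimal PyTorch sketch of a generic 3D rotary position embedding, not PolyVivid's actual module: the function names, the equal-thirds channel partition across axes, and the base frequency are illustrative assumptions.

```python
import torch

def rope_1d(x: torch.Tensor, pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply 1D rotary embedding along the last dim of x.

    x:   (..., seq, dim) with dim even
    pos: (seq,) integer positions along one axis
    """
    dim = x.shape[-1]
    # per-channel-pair rotation frequencies, as in standard RoPE
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)  # (dim/2,)
    angles = pos[:, None].float() * freqs[None, :]                         # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # rotate each (even, odd) channel pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(x: torch.Tensor, t: torch.Tensor, h: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Rotate one third of the channels by each axis (time, height, width).

    The equal split is a hypothetical choice; the paper does not specify it.
    """
    d = x.shape[-1] // 3
    return torch.cat([
        rope_1d(x[..., :d], t),
        rope_1d(x[..., d:2 * d], h),
        rope_1d(x[..., 2 * d:], w),
    ], dim=-1)

# toy usage: 2 frames of 4x4 latent patches, 96 channels
T, H, W, C = 2, 4, 4, 96
tokens = torch.randn(T * H * W, C)
grid = torch.stack(torch.meshgrid(
    torch.arange(T), torch.arange(H), torch.arange(W), indexing="ij"), dim=-1).reshape(-1, 3)
rotated = rope_3d(tokens, grid[:, 0], grid[:, 1], grid[:, 2])
```

In a diffusion transformer, such a rotation would typically be applied to queries and keys inside each attention block; how PolyVivid builds its bidirectional text-image fusion on top of the positional encoding is detailed in the paper itself.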

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition