PMMD: A pose-guided multi-view multi-modal diffusion for person generation

Published: December 17, 2025 | arXiv ID: 2512.15069v1

By: Ziyu Shang, Haoran Liu, Rongchao Zhang, and more

Potential Business Impact:

Creates realistic images of people from different viewpoints, with controllable pose and appearance.

Business Areas:
Motion Capture, Media and Entertainment, Video

Generating consistent human images with controllable pose and appearance is essential for applications in virtual try-on, image editing, and digital human creation. Current methods often suffer from occlusions, garment style drift, and pose misalignment. We propose Pose-guided Multi-view Multimodal Diffusion (PMMD), a diffusion framework that synthesizes photorealistic person images conditioned on multi-view references, pose maps, and text prompts. A multimodal encoder jointly models visual views, pose features, and semantic descriptions, which reduces cross-modal discrepancy and improves identity fidelity. We further design a ResCVA module to enhance local detail while preserving global structure, and a cross-modal fusion module that integrates image semantics with text throughout the denoising pipeline. Experiments on the DeepFashion-MultiModal dataset show that PMMD outperforms representative baselines in consistency, detail preservation, and controllability. Project page and code are available at https://github.com/ZANMANGLOOPYE/PMMD.
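The abstract names two components without elaborating on them. The sketch below is a minimal PyTorch reading, not the authors' implementation: it assumes ResCVA denotes a residual cross-view attention block (the abstract does not expand the acronym) and that cross-modal fusion means text tokens attending to image and pose semantics before the joint sequence is handed to the denoiser's cross-attention. All class, tensor, and dimension names here are hypothetical.

```python
import torch
import torch.nn as nn


class ResCVA(nn.Module):
    """Residual cross-view attention (one plausible reading of "ResCVA"):
    target tokens attend to multi-view reference tokens for local detail,
    and a residual connection preserves the global structure of the input."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, refs: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) target latent tokens; refs: (B, M, D) reference tokens.
        out, _ = self.attn(self.norm(x), refs, refs)  # pull detail from views
        return x + out                                # residual keeps structure


class CrossModalFusion(nn.Module):
    """Fuses image-derived semantics with text tokens into one conditioning
    sequence that the denoising network can cross-attend to at every step."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text: torch.Tensor, img_sem: torch.Tensor) -> torch.Tensor:
        # text: (B, T, D) prompt tokens; img_sem: (B, S, D) view/pose semantics.
        out, _ = self.attn(self.norm(text), img_sem, img_sem)
        return torch.cat([text + out, img_sem], dim=1)  # joint conditioning


if __name__ == "__main__":
    B, D = 2, 256
    x = torch.randn(B, 64, D)          # latent tokens of the target image
    views = torch.randn(B, 3 * 64, D)  # tokens from three reference views
    pose = torch.randn(B, 64, D)       # embedded pose-map tokens
    text = torch.randn(B, 16, D)       # encoded text-prompt tokens

    refs = torch.cat([views, pose], dim=1)
    x = ResCVA(D)(x, refs)
    cond = CrossModalFusion(D)(text, refs)
    print(x.shape, cond.shape)  # torch.Size([2, 64, 256]) torch.Size([2, 272, 256])
```

Under these assumptions, `cond` would replace the text-only conditioning in a standard latent-diffusion cross-attention layer, so the denoiser sees view, pose, and text signals jointly at every timestep.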

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition