PMMD: A pose-guided multi-view multi-modal diffusion for person generation
By: Ziyu Shang, Haoran Liu, Rongchao Zhang, and more
Potential Business Impact:
Creates realistic images of people from different viewpoints.
Generating consistent human images with controllable pose and appearance is essential for applications in virtual try-on, image editing, and digital human creation. Current methods often suffer from occlusions, garment style drift, and pose misalignment. We propose Pose-guided Multi-view Multimodal Diffusion (PMMD), a diffusion framework that synthesizes photorealistic person images conditioned on multi-view references, pose maps, and text prompts. A multimodal encoder jointly models visual views, pose features, and semantic descriptions, which reduces cross-modal discrepancy and improves identity fidelity. We further design a ResCVA module to enhance local detail while preserving global structure, and a cross-modal fusion module that integrates image semantics with text throughout the denoising pipeline. Experiments on the DeepFashion-MultiModal dataset show that PMMD outperforms representative baselines in consistency, detail preservation, and controllability. Project page and code are available at https://github.com/ZANMANGLOOPYE/PMMD.
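To make the conditioning idea concrete, below is a minimal PyTorch sketch of how multi-view reference tokens, pose-map tokens, and text embeddings could be fused via cross-attention before driving a denoising step. All class names (CrossModalFusion, PoseGuidedDenoiser), dimensions, and the fusion scheme are illustrative assumptions, not the PMMD implementation; the authors' code is at the repository linked above.

```python
# Hypothetical sketch of pose-guided multi-view multimodal conditioning for a
# diffusion denoiser. Module names, shapes, and the fusion scheme are assumed
# for illustration only; they do not reproduce the paper's architecture.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse visual (view + pose) tokens with text tokens via cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # Visual tokens attend to the text tokens; a residual keeps global structure.
        fused, _ = self.attn(visual_tokens, text_tokens, text_tokens)
        return self.norm(visual_tokens + fused)


class PoseGuidedDenoiser(nn.Module):
    """Toy denoiser conditioned on multi-view references, a pose map, and text."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.view_proj = nn.Linear(dim, dim)   # per-view reference features
        self.pose_proj = nn.Linear(dim, dim)   # pose-map features
        self.fusion = CrossModalFusion(dim)
        self.denoise = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, noisy_latents, view_feats, pose_feats, text_feats):
        # Concatenate multi-view and pose tokens into one visual condition stream.
        cond = torch.cat([self.view_proj(view_feats), self.pose_proj(pose_feats)], dim=1)
        cond = self.fusion(cond, text_feats)
        # Condition the latent tokens on the pooled multimodal context.
        context = cond.mean(dim=1, keepdim=True)
        return self.denoise(noisy_latents + context)


if __name__ == "__main__":
    B, dim = 2, 256
    model = PoseGuidedDenoiser(dim)
    noisy = torch.randn(B, 64, dim)      # latent tokens being denoised
    views = torch.randn(B, 3 * 16, dim)  # tokens from three reference views
    pose = torch.randn(B, 16, dim)       # tokens from a pose map
    text = torch.randn(B, 8, dim)        # text-prompt embeddings
    print(model(noisy, views, pose, text).shape)  # -> torch.Size([2, 64, 256])
```

The sketch only shows the conditioning flow; a real system would wrap this in a noise-prediction objective and a full UNet or transformer backbone.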
Similar Papers
Jointly Conditioned Diffusion Model for Multi-View Pose-Guided Person Image Synthesis
CV and Pattern Recognition
Creates realistic images of people from different angles.
End-to-End Multi-Modal Diffusion Mamba
CV and Pattern Recognition
Helps computers understand images and text together.
ViewMask-1-to-3: Multi-View Consistent Image Generation via Multimodal Diffusion Models
CV and Pattern Recognition
Creates multiple consistent views of one object from text.