Multi-focal Conditioned Latent Diffusion for Person Image Synthesis
By: Jiaqi Liu, Jichao Zhang, Paolo Rota, et al.
Potential Business Impact:
Makes AI create realistic pictures of people.
The Latent Diffusion Model (LDM) has demonstrated strong capabilities in high-resolution image generation and has been widely employed for Pose-Guided Person Image Synthesis (PGPIS), yielding promising results. However, the compression process of LDM often deteriorates details, particularly in sensitive regions such as facial features and clothing textures. In this paper, we propose a Multi-focal Conditioned Latent Diffusion (MCLD) method that addresses these limitations by conditioning the model on disentangled, pose-invariant features from these sensitive regions. Our approach uses a multi-focal condition aggregation module that effectively integrates facial identity and texture-specific information, enhancing the model's ability to produce appearance-realistic and identity-consistent images. Our method achieves consistent identity and appearance generation on the DeepFashion dataset and, thanks to this generation consistency, enables flexible person image editing. The code is available at https://github.com/jqliu09/mcld.
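To make the aggregation idea concrete, below is a minimal PyTorch sketch of how a multi-focal condition aggregation module *might* fuse face-identity and clothing-texture embeddings into a conditioning sequence for an LDM's cross-attention layers. This is not the authors' implementation (see the linked repository for that); the module name, token count, and embedding dimensions are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a multi-focal condition
# aggregation module: project each focal feature set into a shared space,
# then let learnable query tokens attend over them to produce conditioning
# tokens for a diffusion U-Net's cross-attention. All names/dims assumed.
import torch
import torch.nn as nn

class MultiFocalConditionAggregator(nn.Module):
    def __init__(self, face_dim=512, texture_dim=768, cond_dim=768, num_heads=8):
        super().__init__()
        # Project face-identity and texture features into one conditioning space.
        self.face_proj = nn.Linear(face_dim, cond_dim)
        self.texture_proj = nn.Linear(texture_dim, cond_dim)
        # Learnable queries aggregate the concatenated focal features.
        self.queries = nn.Parameter(torch.randn(16, cond_dim))
        self.attn = nn.MultiheadAttention(cond_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(cond_dim)

    def forward(self, face_emb, texture_emb):
        # face_emb:    (B, N_f, face_dim)    pose-invariant identity features
        # texture_emb: (B, N_t, texture_dim) clothing/texture features
        focal = torch.cat(
            [self.face_proj(face_emb), self.texture_proj(texture_emb)], dim=1
        )
        b = focal.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        out, _ = self.attn(q, focal, focal)  # queries cross-attend to focal features
        # (B, 16, cond_dim): usable as cross-attention context in an LDM.
        return self.norm(out)

if __name__ == "__main__":
    agg = MultiFocalConditionAggregator()
    face = torch.randn(2, 4, 512)      # e.g. identity tokens from a face encoder
    texture = torch.randn(2, 64, 768)  # e.g. patch features from clothing regions
    print(agg(face, texture).shape)    # torch.Size([2, 16, 768])
```

In a setup like this, the aggregated tokens would replace or augment the usual text-conditioning context passed to the denoising U-Net, so the pose-invariant identity and texture cues steer generation at every diffusion step.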
Similar Papers
Prompt-Guided Latent Diffusion with Predictive Class Conditioning for 3D Prostate MRI Generation
Image and Video Processing
Makes doctors' notes create realistic body scans.
Boosting Generative Image Modeling via Joint Image-Feature Synthesis
CV and Pattern Recognition
Creates better pictures by understanding what they mean.
Jointly Conditioned Diffusion Model for Multi-View Pose-Guided Person Image Synthesis
CV and Pattern Recognition
Creates realistic people from different angles.