Multi-focal Conditioned Latent Diffusion for Person Image Synthesis

Published: March 19, 2025 | arXiv ID: 2503.15686v2

By: Jiaqi Liu, Jichao Zhang, Paolo Rota and more

Potential Business Impact:

Enables AI to generate realistic, identity-consistent images of people in new poses.

Business Areas:
Motion Capture, Media and Entertainment, Video

The Latent Diffusion Model (LDM) has demonstrated strong capabilities in high-resolution image generation and has been widely employed for Pose-Guided Person Image Synthesis (PGPIS), yielding promising results. However, the compression process of LDM often degrades details, particularly in sensitive areas such as facial features and clothing textures. In this paper, we propose a Multi-focal Conditioned Latent Diffusion (MCLD) method that addresses these limitations by conditioning the model on disentangled, pose-invariant features from these sensitive regions. Our approach uses a multi-focal condition aggregation module, which effectively integrates facial identity and texture-specific information, enhancing the model's ability to produce appearance-realistic and identity-consistent images. Our method demonstrates consistent identity and appearance generation on the DeepFashion dataset and enables flexible person image editing due to its generation consistency. The code is available at https://github.com/jqliu09/mcld.
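The abstract does not spell out how the multi-focal condition aggregation module works internally. As a minimal sketch of one plausible mechanism, the snippet below fuses face-identity tokens and texture tokens into a latent denoising stream via cross-attention; all names, dimensions, and the random (untrained) projection matrices are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_conditions(face_tokens, texture_tokens, latent_queries, d_k=16, seed=0):
    """Hypothetical multi-focal aggregation: latent queries cross-attend
    to the concatenated face and texture condition tokens.

    face_tokens:    (N_face, d)  pose-invariant facial identity features
    texture_tokens: (N_tex, d)   clothing/texture features
    latent_queries: (N_lat, d)   latent tokens being denoised
    """
    rng = np.random.default_rng(seed)
    d = latent_queries.shape[-1]
    # In a real model these projections would be learned; here they are random.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)

    cond = np.concatenate([face_tokens, texture_tokens], axis=0)  # (N_face+N_tex, d)
    Q = latent_queries @ Wq                                       # (N_lat, d_k)
    K = cond @ Wk                                                 # (N_cond, d_k)
    V = cond @ Wv                                                 # (N_cond, d)
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)               # (N_lat, N_cond)
    # Residual update: conditioned latents keep their original content.
    return latent_queries + attn @ V
```

The residual form lets the denoiser fall back to the unconditioned latents when the attention weights carry little signal, which is a common design choice in conditioning modules.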

Repos / Data Links
https://github.com/jqliu09/mcld

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition