Edge-Aware Image Manipulation via Diffusion Models with a Novel Structure-Preservation Loss
By: Minsu Gong, Nuri Ryu, Jungseul Ok, and more
Potential Business Impact:
Keeps picture edges sharp during edits.
Recent advances in image editing leverage latent diffusion models (LDMs) for versatile, text-prompt-driven edits across diverse tasks. Yet maintaining pixel-level edge structures, which is crucial for tasks such as photorealistic style transfer and image tone adjustment, remains a challenge for latent-diffusion-based editing. To overcome this limitation, we propose a novel Structure Preservation Loss (SPL) that leverages local linear models to quantify structural differences between the input and edited images. Our training-free approach integrates SPL directly into the diffusion model's generative process to ensure structural fidelity. This core mechanism is complemented by a post-processing step that mitigates LDM decoding distortions, a masking strategy for precise edit localization, and a color preservation loss that maintains hues in unedited areas. Experiments confirm that SPL enhances structural fidelity, delivering state-of-the-art performance in latent-diffusion-based image editing. Our code will be publicly released at https://github.com/gongms00/SPL.
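The abstract does not give the exact form of SPL, but "local linear models" suggests a guided-filter-style formulation in which the edited image is locally explained as an affine function of the input, and the unexplained residual measures structural mismatch. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' implementation; the function names (structure_preservation_loss, box_mean) and parameters (radius, eps) are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def box_mean(x: torch.Tensor, radius: int) -> torch.Tensor:
    """Local mean over a (2r+1) x (2r+1) window, reflection-padded."""
    k = 2 * radius + 1
    x = F.pad(x, (radius,) * 4, mode="reflect")
    return F.avg_pool2d(x, kernel_size=k, stride=1)

def structure_preservation_loss(edited: torch.Tensor,
                                guide: torch.Tensor,
                                radius: int = 4,
                                eps: float = 1e-4) -> torch.Tensor:
    """Hypothetical SPL sketch: fit edited ~= a * guide + b in each local
    window (the local linear model used in guided filtering) and penalize
    the fitting residual, i.e. the structure that the linear model cannot
    explain. Tensors are (B, C, H, W) in [0, 1]."""
    mu_g = box_mean(guide, radius)                           # local mean of the input/guide
    mu_e = box_mean(edited, radius)                          # local mean of the edited image
    var_g = box_mean(guide * guide, radius) - mu_g * mu_g    # local variance of the guide
    cov_ge = box_mean(guide * edited, radius) - mu_g * mu_e  # local covariance
    a = cov_ge / (var_g + eps)                               # per-window linear coefficient
    b = mu_e - a * mu_g                                      # per-window offset
    residual = edited - (a * guide + b)                      # structural mismatch
    return residual.pow(2).mean()
```

In a training-free setup such as the one described here, a loss of this kind would typically be evaluated on the decoded (or approximately decoded) latent at each denoising step and its gradient added to the sampling guidance; the exact weighting and schedule are design choices not specified in the abstract.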
Similar Papers
PixPerfect: Seamless Latent Diffusion Local Editing with Discriminative Pixel-Space Refinement
CV and Pattern Recognition
Fixes weird spots in edited pictures.
LAMS-Edit: Latent and Attention Mixing with Schedulers for Improved Content Preservation in Diffusion-Based Image and Style Editing
CV and Pattern Recognition
Changes pictures accurately by blending ideas.
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Makes 3D pictures look right after editing.