Towards Seamless Borders: A Method for Mitigating Inconsistencies in Image Inpainting and Outpainting
By: Xingzhong Hou, Jie Wu, Boxiao Liu, and more
Potential Business Impact:
Fixes broken pictures by filling in missing parts.
Image inpainting is the task of reconstructing missing or damaged parts of an image so that they blend seamlessly with the surrounding content. With the advent of advanced generative models, especially diffusion models and generative adversarial networks, inpainting has achieved remarkable improvements in visual quality and coherence. However, achieving seamless continuity between generated and original content remains a significant challenge. In this work, we propose two novel methods to address color and boundary discrepancies in diffusion-based inpainting models. First, we introduce a modified Variational Autoencoder that corrects color imbalances, ensuring that the final inpainted results are free of color mismatches. Second, we propose a two-step training strategy that improves the blending of generated and existing image content during the diffusion process. Through extensive experiments, we demonstrate that our methods effectively reduce discontinuity and produce high-quality inpainting results that are coherent and visually appealing.
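For context, the sketch below shows a common baseline for diffusion-based inpainting that the abstract's methods aim to improve on: at each denoising step, the known (unmasked) region is re-injected into the latent so that generated content is forced to agree with the original image. This is not the authors' exact method; the helper callables `denoise_step` and `add_noise` are hypothetical placeholders standing in for a diffusion sampler and forward-noising schedule.

```python
# Minimal sketch of latent blending for diffusion inpainting (baseline, not the
# paper's proposed method). Assumes a latent-space diffusion model; denoise_step
# and add_noise are hypothetical callables supplied by the caller.
import torch

def blended_denoising_loop(x_T, known_latent, mask, timesteps, denoise_step, add_noise):
    """
    x_T:          initial noise latent, shape (B, C, H, W)
    known_latent: clean latent of the original (undamaged) image, same shape
    mask:         1 where content must be generated, 0 where it is known
    timesteps:    iterable of noise levels, from high noise to low
    denoise_step: callable (x_t, t) -> x_{t-1}, one step of a diffusion sampler
    add_noise:    callable (latent, t) -> latent noised to level t
    """
    x_t = x_T
    for t in timesteps:
        x_t = denoise_step(x_t, t)            # model proposes the next, less noisy latent
        known_t = add_noise(known_latent, t)  # noise the known region to the same level
        # Keep the model's output only inside the hole; paste the (noised)
        # original content everywhere else.
        x_t = mask * x_t + (1.0 - mask) * known_t
    return x_t
```

The hard paste at every step is precisely where visible seams and color shifts tend to appear, which is the kind of boundary and color discrepancy the abstract targets with its modified VAE and two-step training strategy.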
Similar Papers
ESDiff: Encoding Strategy-inspired Diffusion Model with Few-shot Learning for Color Image Inpainting
CV and Pattern Recognition
Fixes damaged pictures with less training data.
AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption
CV and Pattern Recognition
Stops fake pictures from replacing parts of photos.
PixelHacker: Image Inpainting with Structural and Semantic Consistency
CV and Pattern Recognition
Fills in missing parts of pictures so structure and meaning stay consistent.