From Inpainting to Layer Decomposition: Repurposing Generative Inpainting Models for Image Layer Decomposition
By: Jingxi Chen, Yixiao Zhang, Xiaoye Qian, and more
Potential Business Impact:
Lets you edit parts of a picture separately.
Images can be viewed as layered compositions: foreground objects over a background, with potential occlusions between layers. This layered representation enables independent editing of individual elements, offering greater flexibility for content creation. Despite the progress in large generative models, decomposing a single image into layers remains challenging due to the scarcity of dedicated methods and training data. We observe a strong connection between layer decomposition and inpainting/outpainting tasks, and propose adapting a diffusion-based inpainting model for layer decomposition using lightweight finetuning. To further preserve detail in the latent space, we introduce a novel multi-modal context fusion module with linear attention complexity. Our model is trained purely on a synthetic dataset constructed from open-source assets and achieves superior performance in object removal and occlusion recovery, unlocking new possibilities in downstream editing and creative applications.
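The abstract does not spell out how the fusion module works, so the following is only a minimal sketch of what cross-attention with linear complexity in sequence length can look like in general. The class name, the kernel feature map, and the choice of image latents as queries against extra context tokens (e.g., mask or text embeddings) are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def feature_map(x):
    # Positive kernel feature map (elu + 1), a common choice in linear-attention variants.
    return F.elu(x) + 1.0


class LinearCrossAttention(nn.Module):
    """Hypothetical cross-attention block with O(N + M) cost in token counts.

    Queries come from image latent tokens; keys/values come from a second
    (multi-modal) context sequence. All names here are assumptions for illustration.
    """

    def __init__(self, dim, ctx_dim, heads=8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v = nn.Linear(ctx_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, context):
        b, n, _ = x.shape
        m = context.shape[1]
        h = self.heads
        q = feature_map(self.to_q(x)).reshape(b, n, h, -1).transpose(1, 2)        # (b, h, n, d)
        k = feature_map(self.to_k(context)).reshape(b, m, h, -1).transpose(1, 2)  # (b, h, m, d)
        v = self.to_v(context).reshape(b, m, h, -1).transpose(1, 2)               # (b, h, m, d)
        # Aggregate keys/values once, then reuse for every query:
        # avoids the (n x m) attention matrix of softmax attention.
        kv = torch.einsum("bhmd,bhme->bhde", k, v)                   # (b, h, d, d)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)          # (b, h, n, d)
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)


# Usage sketch: fuse 64x64 latent tokens with a short context sequence.
latents = torch.randn(2, 64 * 64, 320)
context = torch.randn(2, 77, 768)
fused = LinearCrossAttention(dim=320, ctx_dim=768)(latents, context)  # (2, 4096, 320)
```

The point of the kernelized formulation is that the key-value summary `kv` is computed once per context, so cost grows linearly with the number of latent tokens rather than quadratically, which is what makes "linear attention complexity" attractive when fusing high-resolution latents with additional context.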
Similar Papers
Texture-aware Intrinsic Image Decomposition with Model- and Learning-based Priors
CV and Pattern Recognition
Separates object colors from lighting in photos.
Leveraging Depth Maps and Attention Mechanisms for Enhanced Image Inpainting
Image and Video Processing
Fixes missing parts of pictures better.
Semantic-Guided Two-Stage GAN for Face Inpainting with Hybrid Perceptual Encoding
CV and Pattern Recognition
Fixes damaged photos by adding realistic missing face parts.