LayerComposer: Interactive Personalized T2I via Spatially-Aware Layered Canvas
By: Guocheng Gordon Qian, Ruihang Zhang, Tsai-Shien Chen, and more
Potential Business Impact:
Lets you easily place many subjects into one picture and control where each one goes.
Despite their impressive visual fidelity, existing personalized generative models lack interactive control over spatial composition and scale poorly to multiple subjects. To address these limitations, we present LayerComposer, an interactive framework for personalized, multi-subject text-to-image generation. Our approach introduces two main contributions: (1) a layered canvas, a novel representation in which each subject is placed on a distinct layer, enabling occlusion-free composition; and (2) a locking mechanism that preserves selected layers with high fidelity while allowing the remaining layers to adapt flexibly to the surrounding context. Similar to professional image-editing software, the proposed layered canvas allows users to place, resize, or lock input subjects through intuitive layer manipulation. Our versatile locking mechanism requires no architectural changes, relying instead on inherent positional embeddings combined with a new complementary data sampling strategy. Extensive experiments demonstrate that LayerComposer achieves superior spatial control and identity preservation compared to state-of-the-art methods in multi-subject personalized image generation.
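To make the layered-canvas idea concrete, here is a minimal Python sketch of the user-facing data structure the abstract describes: each subject occupies its own layer with a position, a size, and a lock flag. All names here (Layer, LayeredCanvas, to_conditioning) are hypothetical illustrations, not the authors' API; in the actual method, locking is realized through positional embeddings and a complementary data sampling strategy rather than the dictionary output shown below.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Layer:
    """One subject on the layered canvas (hypothetical illustration)."""
    subject_id: str            # identifier for the subject image
    position: Tuple[int, int]  # top-left (x, y) placement on the canvas
    size: Tuple[int, int]      # (width, height) after user resizing
    locked: bool = False       # locked layers are preserved with high fidelity

@dataclass
class LayeredCanvas:
    """Occlusion-free composition: each subject lives on its own layer,
    so overlapping layers never overwrite each other's content."""
    canvas_size: Tuple[int, int]
    layers: List[Layer] = field(default_factory=list)

    def add(self, layer: Layer) -> None:
        self.layers.append(layer)

    def lock(self, subject_id: str) -> None:
        # Locking marks a layer for faithful reproduction; unlocked
        # layers remain free to adapt to the surrounding context.
        for layer in self.layers:
            if layer.subject_id == subject_id:
                layer.locked = True

    def to_conditioning(self) -> List[dict]:
        # Each layer contributes its own spatial extent; in the paper this
        # spatial information is tied to positional embeddings.
        return [
            {"subject": l.subject_id, "bbox": (*l.position, *l.size), "locked": l.locked}
            for l in self.layers
        ]

# Usage: place two subjects, lock one, and inspect the conditioning signal.
canvas = LayeredCanvas(canvas_size=(1024, 1024))
canvas.add(Layer("person_a", position=(100, 200), size=(300, 500)))
canvas.add(Layer("dog_b", position=(500, 400), size=(250, 250)))
canvas.lock("person_a")
print(canvas.to_conditioning())
```

The design choice worth noting is that layers keep subjects separate until generation time, which is what lets a user resize or reorder one subject without disturbing the others, much like layers in professional image-editing software.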
Similar Papers
MagicQuillV2: Precise and Interactive Image Editing with Layered Visual Cues
CV and Pattern Recognition
Lets you precisely change parts of AI-made pictures.
Canvas-to-Image: Compositional Image Generation with Multimodal Controls
CV and Pattern Recognition
Creates pictures from many instructions at once.
LayerCraft: Enhancing Text-to-Image Generation with CoT Reasoning and Layered Object Integration
Machine Learning (CS)
Makes AI create and edit pictures with more control.