MagicQuillV2: Precise and Interactive Image Editing with Layered Visual Cues
By: Zichen Liu, Yue Yu, Hao Ouyang, and more
Potential Business Impact:
Lets you precisely change parts of AI-made pictures.
We propose MagicQuill V2, a novel system that introduces a layered composition paradigm to generative image editing, bridging the gap between the semantic power of diffusion models and the granular control of traditional graphics software. While diffusion transformers excel at holistic generation, their use of singular, monolithic prompts fails to disentangle distinct user intentions for content, position, and appearance. To overcome this, our method deconstructs creative intent into a stack of controllable visual cues: a content layer for what to create, a spatial layer for where to place it, a structural layer for how it is shaped, and a color layer for its palette. Our technical contributions include a specialized data generation pipeline for context-aware content integration, a unified control module to process all visual cues, and a fine-tuned spatial branch for precise local editing, including object removal. Extensive experiments validate that this layered approach effectively resolves the user intention gap, granting creators direct, intuitive control over the generative process.
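The abstract describes the four-layer decomposition but not its concrete interfaces. The sketch below is one hypothetical way to represent such a cue stack as data; all class and field names (ContentLayer, EditRequest, etc.) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Optional

# A minimal, hypothetical data model for the four visual-cue layers
# named in the abstract. Names and fields are assumptions for illustration.

@dataclass
class ContentLayer:
    """What to create: the subject of the edit."""
    prompt: str                                   # textual description of new content
    reference_image: Optional[bytes] = None       # optional image exemplar

@dataclass
class SpatialLayer:
    """Where to place it: a binary mask over the canvas."""
    mask: list[list[int]]                         # H x W; 1 = editable, 0 = preserved

@dataclass
class StructuralLayer:
    """How it is shaped: sketch strokes constraining geometry."""
    strokes: list[tuple[int, int]]                # polyline vertices of the sketch

@dataclass
class ColorLayer:
    """Its palette: coarse color strokes guiding appearance."""
    color_strokes: list[tuple[tuple[int, int], str]]  # (position, hex color)

@dataclass
class EditRequest:
    """A stack of controllable visual cues, resolved together by the model."""
    content: ContentLayer
    spatial: SpatialLayer
    structure: Optional[StructuralLayer] = None   # shape guidance is optional
    color: Optional[ColorLayer] = None            # palette guidance is optional
```

Under this framing, an operation like object removal would plausibly reduce to a request carrying only a spatial layer (the mask of the object) with empty content, which matches the abstract's note that the fine-tuned spatial branch handles precise local edits including removal.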
Similar Papers
LayerComposer: Interactive Personalized T2I via Spatially-Aware Layered Canvas
CV and Pattern Recognition
Lets you easily put many things into one picture.
Object-level Visual Prompts for Compositional Image Generation
CV and Pattern Recognition
Lets you put specific pictures into new scenes.
SpotEdit: Evaluating Visually-Guided Image Editing Methods
CV and Pattern Recognition
Tests AI systems that edit pictures guided by both words and example images.