AnyMS: Bottom-up Attention Decoupling for Layout-guided and Training-free Multi-subject Customization
By: Binhe Yu, Zhen Wang, Kexin Li, and more
Potential Business Impact:
Puts many things into one picture correctly.
Multi-subject customization aims to synthesize multiple user-specified subjects into a coherent image. To address issues such as missing subjects or subject conflicts, recent works incorporate layout guidance to provide explicit spatial constraints. However, existing methods still struggle to balance three critical objectives: text alignment, subject identity preservation, and layout control, while their reliance on additional training further limits scalability and efficiency. In this paper, we present AnyMS, a novel training-free framework for layout-guided multi-subject customization. AnyMS leverages three input conditions: the text prompt, subject images, and layout constraints, and introduces a bottom-up dual-level attention decoupling mechanism to harmonize their integration during generation. Specifically, global decoupling separates cross-attention between the textual and visual conditions to ensure text alignment. Local decoupling confines each subject's attention to its designated region, which prevents subject conflicts and thus guarantees identity preservation and layout control. Moreover, AnyMS employs pre-trained image adapters to extract subject-specific features aligned with the diffusion model, removing the need for subject learning or adapter tuning. Extensive experiments demonstrate that AnyMS achieves state-of-the-art performance, supporting complex compositions and scaling to a larger number of subjects.
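The dual-level decoupling can be pictured as separate cross-attention branches over the same image queries: one image-wide branch for the text prompt (global decoupling) and one per-subject branch that only writes back inside that subject's layout region (local decoupling). Below is a minimal PyTorch sketch of that idea, assuming normalized bounding boxes as the layout constraint; the helper names (build_region_mask, decoupled_cross_attention), the output-gating composition rule, and all tensor shapes are illustrative assumptions, not the AnyMS implementation.

```python
import torch

def build_region_mask(boxes, height, width):
    """Rasterize per-subject layout boxes (normalized x0, y0, x1, y1)
    into binary masks over the latent grid, flattened to query length."""
    masks = []
    for x0, y0, x1, y1 in boxes:
        m = torch.zeros(height, width)
        m[int(y0 * height):int(y1 * height), int(x0 * width):int(x1 * width)] = 1.0
        masks.append(m.flatten())                      # (H*W,)
    return torch.stack(masks)                          # (num_subjects, H*W)

def cross_attention(q, k, v):
    """Plain scaled dot-product cross-attention."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def decoupled_cross_attention(q, text_kv, subject_kvs, region_masks):
    """Global decoupling: text and subject conditions run in separate
    cross-attention branches instead of sharing one key/value sequence.
    Local decoupling: each subject branch only writes back inside its
    own layout region, so subjects cannot overwrite each other."""
    text_k, text_v = text_kv
    out = cross_attention(q, text_k, text_v)           # text branch, image-wide
    for (k, v), region in zip(subject_kvs, region_masks):
        subj = cross_attention(q, k, v)                # subject-image branch
        out = out + region.unsqueeze(-1) * subj        # gated to its layout box
    return out

# Toy usage: two subjects on an 8x8 latent grid, feature dim 16.
H = W = 8
d = 16
q = torch.randn(H * W, d)                              # image queries
text_kv = (torch.randn(77, d), torch.randn(77, d))     # text keys/values
subject_kvs = [(torch.randn(4, d), torch.randn(4, d)) for _ in range(2)]
masks = build_region_mask([(0.0, 0.0, 0.5, 1.0), (0.5, 0.0, 1.0, 1.0)], H, W)
out = decoupled_cross_attention(q, text_kv, subject_kvs, masks)  # (64, 16)
```

In this toy composition rule, the subject features cannot leak outside their boxes because the gating mask zeroes their contribution elsewhere, which is the property the paper relies on to avoid subject conflicts.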
Similar Papers
A Training-Free Approach for Multi-ID Customization via Attention Adjustment and Spatial Control
CV and Pattern Recognition
Creates realistic pictures of many people together.
MUSE: Multi-Subject Unified Synthesis via Explicit Layout Semantic Expansion
CV and Pattern Recognition
Puts many things in pictures exactly where you want.
MOSAIC: Multi-Subject Personalized Generation via Correspondence-Aware Alignment and Disentanglement
CV and Pattern Recognition
Creates realistic pictures with many people.