Chimera: Compositional Image Generation using Part-based Concepting
By: Shivam Singh, Yiming Chen, Agneet Chatterjee, and more
Potential Business Impact:
Combines parts from different pictures into new ones.
Personalized image generative models are highly proficient at synthesizing images from text or a single image, yet they lack explicit control for composing objects from specific parts of multiple source images without user-specified masks or annotations. To address this, we introduce Chimera, a personalized image generation model that generates novel objects by combining specified parts from different source images according to textual instructions. To train our model, we first construct a dataset from a taxonomy built on 464 unique (part, subject) pairs, which we term semantic atoms. From this, we generate 37k prompts and synthesize the corresponding images with a high-fidelity text-to-image model. We train a custom diffusion prior model with part-conditional guidance, which steers the image-conditioning features to enforce both semantic identity and spatial layout. We also introduce an objective metric, PartEval, to assess the fidelity and compositional accuracy of generation pipelines. Human evaluations and our proposed metric show that Chimera outperforms baselines by 14% in part alignment and compositional accuracy and by 21% in visual quality.
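To make the dataset construction concrete, here is a minimal sketch of how textual prompts could be assembled from (part, subject) semantic atoms. The atom list, template wording, and pairing rule below are illustrative assumptions; the abstract only states that the taxonomy contains 464 atoms and yields 37k prompts.

```python
# Illustrative sketch: composing part-swap prompts from (part, subject) atoms.
# The atoms, template, and pairing rule here are assumptions, not the authors' pipeline.
from itertools import product

# Hypothetical slice of the (part, subject) taxonomy.
SEMANTIC_ATOMS = [
    ("wings", "hawk"),
    ("shell", "turtle"),
    ("horns", "bull"),
    ("tail", "fox"),
]

def compose_prompt(base_subject: str, part: str, donor_subject: str) -> str:
    """Render a textual instruction that grafts one subject's part onto another."""
    return f"a photo of a {base_subject} with the {part} of a {donor_subject}"

# Pair every donor part with every other base subject, skipping self-compositions.
prompts = [
    compose_prompt(base, part, donor)
    for (part, donor), (_, base) in product(SEMANTIC_ATOMS, SEMANTIC_ATOMS)
    if donor != base
]

for p in prompts[:3]:
    print(p)  # e.g. "a photo of a turtle with the wings of a hawk"
```

Scaling this combinatorial expansion from a handful of atoms to 464 is one plausible route to a 37k-prompt corpus.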
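The abstract does not describe PartEval's internals, so the snippet below is only a stand-in for the general idea of scoring part-level alignment: it rates a generated image against one text query per requested part using off-the-shelf CLIP (via Hugging Face transformers). It is an illustration of the evaluation concept, not PartEval itself.

```python
# Hedged stand-in for a part-level alignment score; NOT the paper's PartEval metric.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def part_alignment_score(image: Image.Image, part_queries: list[str]) -> float:
    """Average image-text similarity over one query per requested part."""
    inputs = processor(text=part_queries, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (1, num_queries): scaled image-text similarities.
    return outputs.logits_per_image.mean().item()

# Usage sketch: check that both grafted parts are visible in a generated image.
# image = Image.open("chimera_sample.png")
# score = part_alignment_score(image, ["the wings of a hawk", "the shell of a turtle"])
```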
Similar Papers
PartComposer: Learning and Composing Part-Level Concepts from Single-Image Examples
Graphics
Lets computers build new pictures from parts.
Object-level Visual Prompts for Compositional Image Generation
CV and Pattern Recognition
Lets you put specific pictures into new scenes.