Scone: Bridging Composition and Distinction in Subject-Driven Image Generation via Unified Understanding-Generation Modeling
By: Yuran Wang, Bohan Zeng, Chengzhuo Tong, and more
Potential Business Impact:
Generates images that combine multiple subjects while keeping each subject correctly identified and visually distinct.
Subject-driven image generation has advanced from single- to multi-subject composition, while neglecting distinction, the ability to identify and generate the correct subject when inputs contain multiple candidates. This limitation restricts effectiveness in complex, realistic visual settings. We propose Scone, a unified understanding-generation method that integrates composition and distinction. Scone enables the understanding expert to act as a semantic bridge, conveying semantic information and guiding the generation expert to preserve subject identity while minimizing interference. A two-stage training scheme first learns composition, then enhances distinction through semantic alignment and attention-based masking. We also introduce SconeEval, a benchmark for evaluating both composition and distinction across diverse scenarios. Experiments demonstrate that Scone outperforms existing open-source models in composition and distinction tasks on two benchmarks. Our model, benchmark, and training data are available at: https://github.com/Ryann-Ran/Scone.
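The abstract's "attention-based masking" for distinction can be pictured as suppressing attention to tokens of non-target (distractor) subjects and renormalizing. The sketch below is an illustrative assumption about how such masking might work in general; the function name, shapes, and renormalization scheme are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def mask_distractor_attention(attn, subject_ids, target_id):
    """Illustrative sketch: zero out attention to tokens belonging to
    non-target subjects, then renormalize each query row to sum to 1.
    attn:        (num_queries, num_subject_tokens) attention weights
    subject_ids: subject index for each token column
    target_id:   the subject the generator should preserve
    """
    keep = np.array([sid == target_id for sid in subject_ids], dtype=float)
    masked = attn * keep  # suppress columns of distractor subjects
    masked = masked / (masked.sum(axis=1, keepdims=True) + 1e-8)
    return masked

# Toy example: tokens 0 and 1 belong to subject 0, token 2 to subject 1.
attn = np.array([[0.5, 0.3, 0.2],
                 [0.1, 0.6, 0.3]])
out = mask_distractor_attention(attn, subject_ids=[0, 0, 1], target_id=0)
```

After masking, all attention mass flows to the target subject's tokens, which is one plausible way to reduce interference from candidate subjects that should not appear in the output.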
Similar Papers
Supervised Contrastive Learning for Few-Shot AI-Generated Image Detection and Attribution
CV and Pattern Recognition
Finds fake pictures made by AI.
OpenSubject: Leveraging Video-Derived Identity and Diversity Priors for Subject-driven Image Generation and Manipulation
CV and Pattern Recognition
Makes AI draw people better, even in crowds.