StorySync: Training-Free Subject Consistency in Text-to-Image Generation via Region Harmonization
By: Gopalji Gaur, Mohammadreza Zolfaghari, Thomas Brox
Potential Business Impact:
Keeps cartoon characters looking the same across every picture in a story, without retraining the model.
Generating a coherent sequence of images that tells a visual story with text-to-image diffusion models faces a critical challenge: maintaining subject consistency across all story scenes. Existing approaches, which typically rely on fine-tuning or retraining, are computationally expensive, time-consuming, and often interfere with the model's pre-existing capabilities. In this paper, we propose an efficient, training-free method for consistent subject generation. It works seamlessly with pre-trained diffusion models by introducing masked cross-image attention sharing, which dynamically aligns subject features across a batch of images, and Regional Feature Harmonization, which refines visually similar details to improve subject consistency. Experimental results demonstrate that our approach generates visually consistent subjects across a variety of scenarios while preserving the creative abilities of the diffusion model.
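The abstract does not include an implementation, but the core idea of masked cross-image attention sharing admits a short sketch: during attention, each image in the batch attends to its own tokens plus the subject-masked tokens of the other images, so subject features align without any training. The sketch below is a minimal PyTorch illustration under stated assumptions; the function name, tensor shapes, and the visibility rule are hypothetical and not the authors' code, and it assumes per-image subject masks are already available.

```python
import torch
import torch.nn.functional as F

def masked_cross_image_attention(q, k, v, subject_masks):
    """Hypothetical sketch of masked cross-image attention sharing.

    q, k, v:       (B, N, D) per-image query/key/value tokens.
    subject_masks: (B, N) boolean masks marking subject tokens per image.

    Each image attends to all of its own tokens, plus the subject tokens
    of every other image in the batch, pulling subject features into
    alignment across the batch without any training.
    """
    B, N, D = q.shape

    # Widen keys/values so every query can see the whole batch.
    k_all = k.reshape(1, B * N, D).expand(B, -1, -1)  # (B, B*N, D)
    v_all = v.reshape(1, B * N, D).expand(B, -1, -1)

    # Visibility: own-image tokens are always visible; other images'
    # tokens are visible only where their subject mask is on.
    own = torch.eye(B, dtype=torch.bool, device=q.device)    # (B, B)
    own = own.repeat_interleave(N, dim=1)                    # (B, B*N)
    subject = subject_masks.reshape(1, B * N).expand(B, -1)  # (B, B*N)
    visible = own | subject                                  # (B, B*N)

    # Additive attention bias: -inf hides invisible tokens.
    bias = torch.zeros(B, 1, N, B * N, dtype=q.dtype, device=q.device)
    bias.masked_fill_(~visible[:, None, None, :], float("-inf"))

    # Standard scaled dot-product attention over the widened key/value set.
    out = F.scaled_dot_product_attention(
        q.unsqueeze(1), k_all.unsqueeze(1), v_all.unsqueeze(1), attn_mask=bias
    )
    return out.squeeze(1)

# Toy usage: 4 images, 77 tokens, 64-dim heads (all shapes illustrative).
q = torch.randn(4, 77, 64)
k = torch.randn(4, 77, 64)
v = torch.randn(4, 77, 64)
masks = torch.zeros(4, 77, dtype=torch.bool)
masks[:, :20] = True  # pretend the first 20 tokens cover the subject
out = masked_cross_image_attention(q, k, v, masks)
print(out.shape)  # torch.Size([4, 77, 64])
```

In practice, subject masks in training-free pipelines are often derived from the model's own cross-attention maps; how StorySync obtains them, and how Regional Feature Harmonization then refines the shared features, is specified in the paper rather than in this sketch.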
Similar Papers
Storybooth: Training-free Multi-Subject Consistency for Improved Visual Storytelling
CV and Pattern Recognition
Makes AI draw the same people in different pictures.
Geometric Disentanglement of Text Embeddings for Subject-Consistent Text-to-Image Generation using A Single Prompt
CV and Pattern Recognition
Keeps characters the same in generated stories.
Infinite-Story: A Training-Free Consistent Text-to-Image Generation
CV and Pattern Recognition
Creates matching pictures for stories, super fast.