StorySync: Training-Free Subject Consistency in Text-to-Image Generation via Region Harmonization

Published: July 31, 2025 | arXiv ID: 2508.03735v1

By: Gopalji Gaur, Mohammadreza Zolfaghari, Thomas Brox

Potential Business Impact:

Keeps characters visually consistent across the scenes of a generated visual story, without the cost of fine-tuning or retraining the image model.

Generating a coherent sequence of images that tells a visual story with text-to-image diffusion models faces a critical challenge: maintaining subject consistency across all story scenes. Existing approaches typically rely on fine-tuning or retraining, which is computationally expensive, time-consuming, and often interferes with the model's pre-existing capabilities. In this paper, we propose an efficient, training-free method for consistent subject generation. It works seamlessly with pre-trained diffusion models by introducing Masked Cross-Image Attention Sharing, which dynamically aligns subject features across a batch of images, and Regional Feature Harmonization, which refines visually similar details for improved subject consistency. Experimental results demonstrate that our approach generates visually consistent subjects across a variety of scenarios while preserving the creative abilities of the diffusion model.
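The masked cross-image attention sharing described in the abstract can be pictured as extending each image's self-attention with subject-region keys and values drawn from the other images in the batch, so the subject's features are pulled toward a shared appearance. The sketch below is a minimal PyTorch illustration of that idea, not the paper's implementation: the tensor shapes, single-head layout, and the `subject_masks` argument are all assumptions, and the paper's Regional Feature Harmonization step is omitted.

```python
import torch

def masked_cross_image_attention(q, k, v, subject_masks, scale=None):
    """Minimal sketch of masked cross-image attention sharing.

    q, k, v:        (batch, tokens, dim) self-attention projections for a
                    batch of images generated together.
    subject_masks:  (batch, tokens) boolean masks marking subject regions.

    Each image's queries attend to its own keys/values plus the
    subject-region keys/values gathered from the other images.
    """
    b, t, d = q.shape
    scale = scale if scale is not None else d ** -0.5
    outputs = []
    for i in range(b):
        # Full keys/values of image i itself.
        ks, vs = [k[i]], [v[i]]
        # Subject-region keys/values shared in from the other images.
        for j in range(b):
            if j == i:
                continue
            m = subject_masks[j]          # (tokens,) bool
            ks.append(k[j][m])            # (n_subject_tokens, dim)
            vs.append(v[j][m])
        k_i = torch.cat(ks, dim=0)        # (tokens + shared, dim)
        v_i = torch.cat(vs, dim=0)
        attn = torch.softmax(q[i] @ k_i.T * scale, dim=-1)
        outputs.append(attn @ v_i)        # (tokens, dim)
    return torch.stack(outputs)           # (batch, tokens, dim)
```

In a real pipeline the subject masks would have to come from somewhere (e.g. the model's cross-attention maps for the subject token, or a segmentation step); here they are simply supplied by the caller to keep the sketch self-contained.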

Country of Origin
🇩🇪 Germany

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition