Infinite-Story: A Training-Free Consistent Text-to-Image Generation
By: Jihun Park, Kyoungmin Lee, Jongmin Gim, and more
Potential Business Impact:
Creates matching pictures for stories, super fast.
We present Infinite-Story, a training-free framework for consistent text-to-image (T2I) generation tailored to multi-prompt storytelling scenarios. Built upon a scale-wise autoregressive model, our method addresses two key challenges in consistent T2I generation: identity inconsistency and style inconsistency. To overcome these issues, we introduce three complementary techniques: Identity Prompt Replacement, which mitigates context bias in text encoders to align identity attributes across prompts, and a unified attention guidance mechanism comprising Adaptive Style Injection and Synchronized Guidance Adaptation, which jointly enforce global style and identity appearance consistency while preserving prompt fidelity. Unlike prior diffusion-based approaches that require fine-tuning or suffer from slow inference, Infinite-Story operates entirely at test time, delivering high identity and style consistency across diverse prompts. Extensive experiments demonstrate that our method achieves state-of-the-art generation performance while offering over 6× faster inference (1.72 seconds per image) than the fastest existing consistent T2I models, highlighting its effectiveness and practicality for real-world visual storytelling.
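The abstract describes these techniques only at a high level, so the following is a minimal illustrative sketch rather than the paper's implementation. The `<subject>` placeholder, `CANONICAL_IDENTITY`, `replace_identity`, and `shared_self_attention` are all hypothetical names, and the key/value-sharing function is a generic attention-sharing pattern from the style-alignment literature standing in for Adaptive Style Injection, whose actual mechanism may differ.

```python
import torch

# Hypothetical canonical identity phrase; the phrasing and the
# placeholder-token convention are assumptions, not the paper's API.
CANONICAL_IDENTITY = "a young woman with short red hair and a green coat"

def replace_identity(prompt: str, placeholder: str = "<subject>") -> str:
    """Insert the same identity phrase into every scene prompt, so the
    text encoder sees identical identity tokens regardless of scene
    context (the stated goal of Identity Prompt Replacement)."""
    return prompt.replace(placeholder, CANONICAL_IDENTITY)

def shared_self_attention(q, k, v, k_anchor, v_anchor):
    """Generic key/value sharing: each image also attends to an anchor
    image's features, pulling style and appearance toward a common
    reference. A stand-in for the paper's attention guidance, not its
    exact mechanism."""
    k_mix = torch.cat([k, k_anchor], dim=1)  # (B, N + N_a, D)
    v_mix = torch.cat([v, v_anchor], dim=1)
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k_mix.transpose(-2, -1) * scale, dim=-1)
    return attn @ v_mix

scene_prompts = [
    "<subject> reading a book in a sunlit library",
    "<subject> walking through a rainy city street at night",
    "<subject> sailing a small boat across a stormy sea",
]
for p in map(replace_identity, scene_prompts):
    print(p)
```

Both steps operate purely at prompt-encoding and attention time, with no weight updates, which is consistent with the training-free, test-time character the abstract claims.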
Similar Papers
Consistent text-to-image generation via scene de-contextualization
CV and Pattern Recognition
Keeps people looking the same in different pictures.
Improving Text-to-Image Generation with Input-Side Inference-Time Scaling
Computation and Language
Makes computer pictures better from simple words.