iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation
By: Zhoujie Fu, Xianfang Zeng, Jinghong Lan, and more
Potential Business Impact:
Turns one video model into a tool that makes and edits whole sets of pictures.
Pre-trained video models learn powerful priors for generating high-quality, temporally coherent content, but their dynamics are often constrained by the continuous nature of their training data. We hypothesize that by injecting the rich and unconstrained content diversity of image data into this coherent temporal framework, we can generate image sets that feature both natural transitions and a far more expansive dynamic range. To this end, we introduce iMontage, a unified framework designed to repurpose a powerful video model into an all-in-one image generator. The framework consumes and produces variable-length image sets, unifying a wide array of image generation and editing tasks. To achieve this, we propose an elegant and minimally invasive adaptation strategy, complemented by a tailored data curation process and training paradigm. This approach allows the model to acquire broad image manipulation capabilities without corrupting its invaluable original motion priors. iMontage excels across several mainstream many-in-many-out tasks, not only maintaining strong cross-image contextual consistency but also generating scenes with extraordinary dynamics that surpass conventional scopes. Find our homepage at: https://kr1sjfu.github.io/iMontage-web/.
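To make the "many-in, many-out" idea concrete, below is a minimal, hypothetical sketch of how a pretrained video diffusion backbone could be repurposed this way: a variable-length set of condition-image latents and noisy target latents are packed along the frame axis, so the backbone's temporal attention carries context across images. The abstract does not specify iMontage's actual architecture; `VideoDiT`, `many_to_many_step`, and the packing scheme here are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only (assumed design, not the iMontage implementation):
# treat a set of images as "frames" of one video so a pretrained video
# backbone's temporal attention provides cross-image consistency.
import torch
import torch.nn as nn

class VideoDiT(nn.Module):
    """Stand-in for a pretrained video diffusion transformer (hypothetical)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # Temporal attention over the frame axis, as in video diffusion models.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, dim) latent tokens, one token per frame.
        h, _ = self.attn(frames, frames, frames)
        return self.proj(h)  # per-frame denoising prediction

def many_to_many_step(backbone: nn.Module,
                      cond_latents: torch.Tensor,
                      noisy_targets: torch.Tensor) -> torch.Tensor:
    """One denoising step over a packed, variable-length frame sequence.

    cond_latents:  (B, N_in, D)  clean latents of the input image set
    noisy_targets: (B, N_out, D) noisy latents of the images to generate
    """
    # Pack conditions and targets as one variable-length "video".
    frames = torch.cat([cond_latents, noisy_targets], dim=1)
    pred = backbone(frames)
    # Only the target positions receive a denoising prediction.
    return pred[:, cond_latents.shape[1]:]

if __name__ == "__main__":
    backbone = VideoDiT(dim=64)
    cond = torch.randn(2, 3, 64)   # 3 input images (e.g., references to edit)
    noisy = torch.randn(2, 5, 64)  # 5 images to generate
    out = many_to_many_step(backbone, cond, noisy)
    print(out.shape)               # torch.Size([2, 5, 64])
```

Because both the input and output sets simply occupy positions on the frame axis, the same loop handles one-to-many generation, many-to-one editing, and many-to-many tasks without architectural changes, which is one plausible reading of the "minimally invasive adaptation" the abstract describes.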
Similar Papers
DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation
CV and Pattern Recognition
Makes movie clips flow like one long shot.
UnityVideo: Unified Multi-Modal Multi-Task Learning for Enhancing World-Aware Video Generation
CV and Pattern Recognition
Makes videos understand the real world better.
MultiCOIN: Multi-Modal COntrollable Video INbetweening
CV and Pattern Recognition
Makes videos move exactly how you want.