Match-and-Fuse: Consistent Generation from Unstructured Image Sets
By: Kate Feingold, Omri Kaduri, Tali Dekel
Potential Business Impact:
Generates new sets of pictures that keep a shared subject consistent across all of them.
We present Match-and-Fuse, a zero-shot, training-free method for consistent, controlled generation of unstructured image sets: collections that share a common visual element yet differ in viewpoint, time of capture, and surrounding content. Unlike existing methods that operate on individual images or densely sampled videos, our framework performs set-to-set generation: given a source set and user prompts, it produces a new set that preserves cross-image consistency of the shared content. Our key idea is to model the task as a graph, where each node corresponds to an image and each edge triggers joint generation of an image pair. This formulation consolidates all pairwise generations into a unified framework, enforcing their local consistency while ensuring global coherence across the entire set. This is achieved by fusing internal features across image pairs, guided by dense input correspondences, without requiring masks or manual supervision. It also allows us to leverage an emergent prior in text-to-image models that encourages coherent generation when multiple views share a single canvas. Match-and-Fuse achieves state-of-the-art consistency and visual quality, and unlocks new capabilities for content creation from image collections.
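The graph formulation described in the abstract can be sketched in a few lines. The toy code below is purely illustrative and not the authors' implementation: nodes hold image features, each edge with known correspondences triggers a pairwise fusion step, and simple averaging stands in for the diffusion-feature fusion used in the actual method. All names (`fuse_pair`, `match_and_fuse`) are assumed for this sketch.

```python
from itertools import combinations

def fuse_pair(feat_a, feat_b, correspondences):
    """Fuse features of two images at corresponding locations.

    correspondences: list of (i, j) index pairs mapping feat_a[i] <-> feat_b[j].
    Averaging is a stand-in for the paper's internal-feature fusion.
    """
    a, b = list(feat_a), list(feat_b)
    for i, j in correspondences:
        shared = 0.5 * (a[i] + b[j])
        a[i] = b[j] = shared
    return a, b

def match_and_fuse(features, correspondences):
    """Run pairwise fusion over all graph edges.

    features: dict node -> feature list.
    correspondences: dict (u, v) -> list of index pairs; a missing entry
    means the pair shares no content, i.e. no edge in the graph.
    Iterating over edges propagates local pairwise consistency toward
    global coherence across the set.
    """
    feats = {k: list(v) for k, v in features.items()}
    for u, v in combinations(sorted(feats), 2):
        edge = correspondences.get((u, v))
        if edge is None:
            continue
        feats[u], feats[v] = fuse_pair(feats[u], feats[v], edge)
    return feats

# Three images; img0-img1 and img1-img2 share one corresponding feature.
feats = {"img0": [1.0, 2.0], "img1": [3.0, 2.0], "img2": [5.0, 2.0]}
corr = {("img0", "img1"): [(0, 0)], ("img1", "img2"): [(0, 0)]}
out = match_and_fuse(feats, corr)
```

After the loop, features at corresponding locations agree along each edge, illustrating how local pairwise constraints chain together even between images that were never fused directly.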
Similar Papers
FUSE: Unifying Spectral and Semantic Cues for Robust AI-Generated Image Detection
CV and Pattern Recognition
Detects pictures that were generated by AI.
Geometry-Aware Scene-Consistent Image Generation
CV and Pattern Recognition
Adds objects to pictures while keeping the scene looking real.
Towards Unified Semantic and Controllable Image Fusion: A Diffusion Transformer Approach
CV and Pattern Recognition
Combines pictures using words to make better images.