Generative Panoramic Image Stitching
By: Mathieu Tuli, Kaveh Kamali, David B. Lindell
Potential Business Impact:
Blends many photos into one seamless picture.
We introduce the task of generative panoramic image stitching, which aims to synthesize seamless panoramas that are faithful to the content of multiple reference images containing parallax effects and strong variations in lighting, camera capture settings, or style. In this challenging setting, traditional image stitching pipelines fail, producing outputs with ghosting and other artifacts. While recent generative models are capable of outpainting content consistent with multiple reference images, they fail when tasked with synthesizing large, coherent regions of a panorama. To address these limitations, we propose a method that fine-tunes a diffusion-based inpainting model to preserve a scene's content and layout based on multiple reference images. Once fine-tuned, the model outpaints a full panorama from a single reference image, producing a seamless and visually coherent result that faithfully integrates content from all reference images. Our approach significantly outperforms baselines for this task in terms of image quality and the consistency of image structure and scene layout when evaluated on captured datasets.
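The abstract's key idea, outpainting a full panorama step by step from a single reference image, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the `stub_inpaint` function (a mean-color fill) stands in for the fine-tuned diffusion inpainting model, and the window/stride scheme is an assumed detail chosen to show how a sliding window can progressively extend a canvas until every pixel is synthesized.

```python
import numpy as np

def stub_inpaint(window, mask):
    """Placeholder for the fine-tuned diffusion inpainting model:
    fills masked (unknown) pixels with the mean color of the known pixels."""
    known_pixels = window[~mask]
    fill = known_pixels.mean(axis=0) if known_pixels.size else np.zeros(window.shape[-1])
    out = window.copy()
    out[mask] = fill
    return out

def outpaint_panorama(reference, pano_width, window=64, stride=32):
    """Progressively outpaint a panorama canvas from one reference image.

    The reference is placed at the left edge; a window slides rightward,
    and each step inpaints the still-unknown pixels in its view, mimicking
    how a diffusion inpainting model would extend the scene outward.
    """
    h, w, c = reference.shape
    canvas = np.zeros((h, pano_width, c), dtype=float)
    known = np.zeros((h, pano_width), dtype=bool)
    canvas[:, :w] = reference  # seed the canvas with the reference image
    known[:, :w] = True

    for x in range(0, pano_width - window + 1, stride):
        view = canvas[:, x:x + window]
        mask = ~known[:, x:x + window]
        canvas[:, x:x + window] = stub_inpaint(view, mask)
        known[:, x:x + window] = True
    # one last window flush against the right edge covers any remainder
    canvas[:, -window:] = stub_inpaint(canvas[:, -window:], ~known[:, -window:])
    known[:, -window:] = True
    return canvas, known
```

Because each window overlaps the previously synthesized region, every inpainting step is conditioned on already-generated content, which is what lets a single reference seed a coherent full-width panorama.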
Similar Papers
JoPano: Unified Panorama Generation via Joint Modeling
CV and Pattern Recognition
Makes 360-degree pictures from words or other pictures.
PIS3R: Very Large Parallax Image Stitching via Deep 3D Reconstruction
CV and Pattern Recognition
Stitches photos taken from very different viewpoints into one picture.
PanoDreamer: Consistent Text to 360-Degree Scene Generation
CV and Pattern Recognition
Creates 3D worlds from words and pictures.