DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training
By: Haoran Feng, Dizhe Zhang, Xiangtai Li, and more
Potential Business Impact:
Makes 360° pictures look real and smooth.
In this work, we propose DiT360, a DiT-based framework that performs hybrid training on perspective and panoramic data for panoramic image generation. We attribute the difficulty of maintaining geometric fidelity and photorealism chiefly to the scarcity of large-scale, high-quality, real-world panoramic data; this data-centric view differs from prior methods that focus on model design. DiT360 therefore introduces several key modules for inter-domain transformation and intra-domain augmentation, applied at both the pre-VAE image level and the post-VAE token level. At the image level, it incorporates cross-domain knowledge through perspective image guidance and panoramic refinement, which enhance perceptual quality while regularizing diversity and photorealism. At the token level, hybrid supervision is applied across multiple modules: circular padding for boundary continuity, a yaw loss for rotational robustness, and a cube loss for distortion awareness. Extensive experiments on text-to-panorama, inpainting, and outpainting tasks show that our method achieves better boundary consistency and image fidelity across eleven quantitative metrics. Our code is available at https://github.com/Insta360-Research-Team/DiT360.
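To make the token-level ideas concrete, below is a minimal sketch, assuming a PyTorch setup, of how circular padding for boundary continuity and a yaw-consistency loss for rotational robustness could be implemented on equirectangular latents. The module names, tensor shapes, and loss formulation here are illustrative assumptions, not the authors' implementation (see the repository linked above for the actual code).

```python
# Minimal sketch (assumptions, not the DiT360 code): circular padding across the
# panorama seam and a yaw-consistency loss on equirectangular latents.
import torch
import torch.nn as nn
import torch.nn.functional as F


def circular_pad_width(x: torch.Tensor, pad: int) -> torch.Tensor:
    """Wrap-around padding along the width (longitude) axis so the left and
    right panorama borders stay continuous; plain zero padding on height."""
    x = F.pad(x, (pad, pad, 0, 0), mode="circular")   # wrap width
    x = F.pad(x, (0, 0, pad, pad), mode="constant")   # zero-pad height
    return x


class CircularConv2d(nn.Module):
    """Conv layer whose receptive field crosses the panorama seam."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.pad = k // 2
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(circular_pad_width(x, self.pad))


def yaw_roll(x: torch.Tensor, shift: int) -> torch.Tensor:
    """A yaw rotation of an equirectangular image is a horizontal roll."""
    return torch.roll(x, shifts=shift, dims=-1)


def yaw_consistency_loss(model: nn.Module, latents: torch.Tensor) -> torch.Tensor:
    """Rotational robustness: the output for a yaw-rotated input should match
    the yaw-rotated output of the original input."""
    shift = int(torch.randint(1, latents.shape[-1], (1,)))
    out = model(latents)
    out_rot = model(yaw_roll(latents, shift))
    return F.mse_loss(out_rot, yaw_roll(out, shift))


if __name__ == "__main__":
    # Toy usage on random latents (e.g. post-VAE tokens arranged on a 2D grid).
    model = CircularConv2d(4, 4)
    latents = torch.randn(2, 4, 32, 64)   # B, C, H, W with W = 2H for equirect
    loss = yaw_consistency_loss(model, latents)
    loss.backward()
    print(f"yaw-consistency loss: {loss.item():.4f}")
```

A cube loss in the same spirit would reproject the equirectangular output onto cubemap faces and supervise it there, penalizing errors where equirectangular distortion is strongest; that projection step is omitted here for brevity.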
Similar Papers
Dual-Projection Fusion for Accurate Upright Panorama Generation in Robotic Vision
CV and Pattern Recognition
Makes robot pictures upright so robots see better.
One Flight Over the Gap: A Survey from Perspective to Panoramic Vision
CV and Pattern Recognition
Helps cameras see everything, everywhere, all at once.
Matrix-3D: Omnidirectional Explorable 3D World Generation
CV and Pattern Recognition
Creates 3D worlds from one picture or words.