SE360: Semantic Edit in 360$^\circ$ Panoramas via Hierarchical Data Construction
By: Haoyi Zhong, Fang-Lue Zhang, Andrew Chalmers, et al.
Potential Business Impact:
Edits 360° photos with simple text instructions.
While instruction-based image editing is emerging, extending it to 360$^\circ$ panoramas introduces additional challenges: existing methods often produce implausible results in both equirectangular projection (ERP) and perspective views. To address these limitations, we propose SE360, a novel framework for multi-condition guided object editing in 360$^\circ$ panoramas. At its core is a coarse-to-fine autonomous data generation pipeline that requires no manual intervention. The pipeline leverages a Vision-Language Model (VLM) and adaptive projection adjustment for hierarchical analysis, ensuring holistic segmentation of objects together with their physical context. The resulting data pairs are both semantically meaningful and geometrically consistent, even when sourced from unlabeled panoramas. We further introduce a cost-effective, two-stage data refinement strategy that improves data realism and mitigates model overfitting to erasure artifacts. On the constructed dataset, we train a Transformer-based diffusion model that supports flexible object editing in 360$^\circ$ panoramas guided by text, masks, or reference images. Experiments demonstrate that our method outperforms existing approaches in both visual quality and semantic accuracy.
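The "adaptive projection adjustment" mentioned above builds on a standard operation: resampling distortion-free perspective crops out of an equirectangular panorama so that a VLM can analyze each view. Below is a minimal sketch of that ERP-to-perspective reprojection, assuming the panorama is a NumPy array; the function name, signature, and nearest-neighbor sampling are illustrative choices, not the authors' implementation.

import numpy as np

def erp_to_perspective(erp, fov_deg, yaw_deg, pitch_deg, out_hw):
    """Sample a perspective view from an equirectangular panorama.

    erp:      (H, W, 3) array, with W == 2 * H
    fov_deg:  horizontal field of view of the virtual camera
    yaw_deg, pitch_deg: viewing direction in degrees
    out_hw:   (height, width) of the perspective crop
    """
    H, W = erp.shape[:2]
    oh, ow = out_hw
    # Focal length in pixels from the horizontal FOV.
    f = 0.5 * ow / np.tan(np.radians(fov_deg) / 2.0)

    # Ray directions through each output pixel (camera looks along +z).
    xs = np.arange(ow) - (ow - 1) / 2.0
    ys = np.arange(oh) - (oh - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays by pitch (about x), then yaw (about y).
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(yw), 0, np.sin(yw)],
                   [ 0,          1, 0         ],
                   [-np.sin(yw), 0, np.cos(yw)]])
    dirs = dirs @ (Ry @ Rx).T

    # Rays -> longitude/latitude -> ERP pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))     # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return erp[v, u]

In the spirit of the coarse-to-fine analysis described in the abstract, a caller could first sweep yaw over [0°, 360°) at a wide FOV for coarse scene understanding, then re-render narrower-FOV views around detected objects for fine-grained segmentation.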
Similar Papers
DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training
CV and Pattern Recognition
Makes 360° pictures look real and smooth.
Physically Aware 360$^\circ$ View Generation from a Single Image using Disentangled Scene Embeddings
CV and Pattern Recognition
Creates realistic 3D views from one picture.
Hallucinating 360°: Panoramic Street-View Generation via Local Scenes Diffusion and Probabilistic Prompting
CV and Pattern Recognition
Makes self-driving cars see all around them.