3D-Consistent Multi-View Editing by Diffusion Guidance
By: Josef Bengtson, David Nilsson, Dong In Lee and more
Potential Business Impact:
Makes edited 3D scenes look consistent from every angle.
Recent advancements in diffusion models have greatly improved text-based image editing, yet methods that edit images independently often produce geometrically and photometrically inconsistent results across different views of the same scene. Such inconsistencies are particularly problematic when editing 3D representations such as NeRFs or Gaussian Splat models. We propose a training-free diffusion framework that enforces multi-view consistency during the image editing process. The key assumption is that corresponding points in the unedited images should undergo similar transformations after editing. To achieve this, we introduce a consistency loss that guides the diffusion sampling toward coherent edits. The framework is flexible and can be combined with a wide range of image editing methods, supporting both dense and sparse multi-view editing setups. Experimental results show that our approach significantly improves 3D consistency compared to existing multi-view editing methods. We also show that this increased consistency enables high-quality Gaussian Splat editing with sharp details and strong fidelity to user-specified text prompts. Please refer to our project page for video results: https://3d-consistent-editing.github.io/
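The paper does not publish implementation details here, but the core idea, guiding diffusion sampling so that corresponding points receive similar edits, can be sketched as below. This is a minimal illustrative sketch, not the authors' code: the names `denoise_step`, `consistency_loss`, `guided_step`, and the `corr` correspondence structure are hypothetical, the correspondences are assumed to come from the unedited scene geometry (e.g. the NeRF or Gaussian Splat model), and the exact loss form and guidance schedule in the paper may differ.

```python
# Minimal sketch of consistency-guided diffusion sampling (hypothetical names;
# assumptions noted above). Requires PyTorch.
import torch

def consistency_loss(x0_hat, originals, corr):
    """Penalize corresponding points whose edit residuals (edited - original) differ.

    x0_hat, originals: tensors of shape (V, C, H, W), one entry per view.
    corr: list of (view_i, view_j, pts_i, pts_j), where pts_* are long tensors
          of shape (N, 2) holding matched pixel coordinates (row, col).
    """
    loss = x0_hat.new_zeros(())
    for vi, vj, pts_i, pts_j in corr:
        # Edit residuals at corresponding pixels in the two views.
        res_i = (x0_hat[vi] - originals[vi])[:, pts_i[:, 0], pts_i[:, 1]]
        res_j = (x0_hat[vj] - originals[vj])[:, pts_j[:, 0], pts_j[:, 1]]
        loss = loss + (res_i - res_j).pow(2).mean()
    return loss / max(len(corr), 1)

def guided_step(latents, t, originals, corr, denoise_step, scale=1.0):
    """One guidance update: nudge the latents against the consistency-loss gradient.

    denoise_step(latents, t) is a user-supplied, differentiable wrapper around
    any diffusion editing method that returns predicted clean images (V, C, H, W).
    """
    latents = latents.detach().requires_grad_(True)
    x0_hat = denoise_step(latents, t)
    loss = consistency_loss(x0_hat, originals, corr)
    (grad,) = torch.autograd.grad(loss, latents)
    return (latents - scale * grad).detach(), loss.item()
```

Note that the sketch compares edit residuals (edited minus original) rather than raw pixel values, reflecting the abstract's assumption that corresponding points should undergo similar transformations, not that they should look identical after editing.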
Similar Papers
Coupled Diffusion Sampling for Training-Free Multi-View Image Editing
CV and Pattern Recognition
Edits pictures from many angles, all matching.
CoreEditor: Consistent 3D Editing via Correspondence-constrained Diffusion
CV and Pattern Recognition
Changes 3D objects with words, keeping them clear.
DisCo3D: Distilling Multi-View Consistency for 3D Scene Editing
CV and Pattern Recognition
Changes 3D scenes while keeping every view matching.