
3D-Consistent Multi-View Editing by Diffusion Guidance

Published: November 27, 2025 | arXiv ID: 2511.22228v1

By: Josef Bengtson, David Nilsson, Dong In Lee, and more

Potential Business Impact:

Keeps edits consistent across viewpoints, so edited 3D scenes (e.g., NeRFs or Gaussian Splats) look correct from every angle.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent advancements in diffusion models have greatly improved text-based image editing, yet methods that edit images independently often produce geometrically and photometrically inconsistent results across different views of the same scene. Such inconsistencies are particularly problematic when editing 3D representations such as NeRFs or Gaussian Splat models. We propose a training-free diffusion framework that enforces multi-view consistency during the image editing process. The key assumption is that corresponding points in the unedited images should undergo similar transformations after editing. To achieve this, we introduce a consistency loss that guides the diffusion sampling toward coherent edits. The framework is flexible and can be combined with widely varying image editing methods, supporting both dense and sparse multi-view editing setups. Experimental results show that our approach significantly improves 3D consistency compared to existing multi-view editing methods. We also show that this increased consistency enables high-quality Gaussian Splat editing with sharp details and strong fidelity to user-specified text prompts. Please refer to our project page for video results: https://3d-consistent-editing.github.io/
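To make the core idea concrete, here is a minimal sketch of what such a consistency-guided sampling step could look like. It is not the authors' implementation: the function names (`consistency_loss`, `guided_update`, `decode_fn`), the correspondence format, and the simple pixel-difference notion of an "edit transformation" are all illustrative assumptions. The sketch only shows the stated principle that corresponding points across views should change in similar ways, with the loss gradient nudging the diffusion sample toward coherent edits.

```python
import torch


def consistency_loss(edited, original, correspondences):
    """Illustrative multi-view consistency loss (assumed formulation).

    edited, original: (V, C, H, W) tensors of edited / unedited view images.
    correspondences:  iterable of (view_i, (y_i, x_i), view_j, (y_j, x_j))
                      pixel correspondences between views.
    """
    loss = edited.new_zeros(())
    for vi, (yi, xi), vj, (yj, xj) in correspondences:
        # Approximate the "edit transformation" at each point as the
        # pixel-wise change introduced by the edit.
        delta_i = edited[vi, :, yi, xi] - original[vi, :, yi, xi]
        delta_j = edited[vj, :, yj, xj] - original[vj, :, yj, xj]
        # Penalize corresponding points whose edits differ.
        loss = loss + (delta_i - delta_j).pow(2).sum()
    return loss / max(len(correspondences), 1)


def guided_update(x_t, decode_fn, original_views, correspondences, scale=1.0):
    """One guidance step on the current diffusion sample x_t.

    `decode_fn` maps the sample to per-view images; the gradient of the
    consistency loss is used to nudge x_t toward multi-view-coherent edits
    (classifier-guidance style). All names here are hypothetical.
    """
    x_t = x_t.detach().requires_grad_(True)
    edited_views = decode_fn(x_t)  # (V, C, H, W)
    loss = consistency_loss(edited_views, original_views, correspondences)
    grad = torch.autograd.grad(loss, x_t)[0]
    return x_t - scale * grad
```

In a full pipeline this update would be interleaved with the ordinary denoising steps of whichever text-based editing method is being used, which is consistent with the paper's claim that the framework is training-free and can wrap widely varying editors.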

Country of Origin
🇸🇪 Sweden

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition