CoreEditor: Consistent 3D Editing via Correspondence-constrained Diffusion
By: Zhe Zhu, Honghua Chen, Peng Li, and more
Potential Business Impact:
Edits 3D objects from text prompts while keeping views consistent and details sharp.
Text-driven 3D editing seeks to modify 3D scenes according to textual descriptions, and most existing approaches tackle this by adapting pre-trained 2D image editors to multi-view inputs. However, without explicit control over multi-view information exchange, they often fail to maintain cross-view consistency, leading to insufficient edits and blurry details. We introduce CoreEditor, a novel framework for consistent text-to-3D editing. The key innovation is a correspondence-constrained attention mechanism that enforces precise interactions between pixels expected to remain consistent throughout the diffusion denoising process. Beyond relying solely on geometric alignment, we further incorporate semantic similarity estimated during denoising, enabling more reliable correspondence modeling and robust multi-view editing. In addition, we design a selective editing pipeline that allows users to choose preferred results from multiple candidates, offering greater flexibility and user control. Extensive experiments show that CoreEditor produces high-quality, 3D-consistent edits with sharper details, significantly outperforming prior methods.
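The abstract describes attention restricted to pixel pairs that should stay consistent across views, with correspondences estimated from both geometric alignment and semantic similarity computed during denoising. The sketch below illustrates one way such a constraint could be applied inside an attention layer; the tensor layout, the source of the geometric mask, the cosine-similarity threshold, and the fusion rule are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of correspondence-constrained cross-view attention.
# Assumptions (not from the paper): features are flattened across views,
# geometric correspondences arrive as a boolean mask, and semantic
# similarity is a thresholded cosine similarity between key features.
import torch
import torch.nn.functional as F


def correspondence_constrained_attention(q, k, v, geo_mask, sem_threshold=0.5):
    """
    q, k, v:        (N, d) features, N = num_views * num_pixels.
    geo_mask:       (N, N) bool; True where two pixels are geometrically
                    aligned (e.g. project to the same 3D surface point).
    sem_threshold:  hypothetical cutoff refining geometric matches with
                    semantic similarity estimated during denoising.
    """
    n, d = q.shape

    # Semantic similarity between denoising features (cosine on keys).
    k_norm = F.normalize(k, dim=-1)
    sem_sim = k_norm @ k_norm.t()                      # (N, N)

    # Attend only to pairs that are geometrically aligned AND semantically close.
    corr_mask = geo_mask & (sem_sim > sem_threshold)
    corr_mask |= torch.eye(n, dtype=torch.bool, device=q.device)  # keep self-attention

    attn = (q @ k.t()) / d ** 0.5
    attn = attn.masked_fill(~corr_mask, float("-inf"))  # block non-corresponding pixels
    return F.softmax(attn, dim=-1) @ v
```

In this reading, the correspondence mask is what enforces "precise interactions between pixels expected to remain consistent": attention weight can only flow between matched locations, so edits propagate coherently across views instead of drifting independently per image.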
Similar Papers
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Keeps edited 3D scenes looking consistent from every viewpoint.
DisCo3D: Distilling Multi-View Consistency for 3D Scene Editing
CV and Pattern Recognition
Keeps 3D scene edits consistent across views by distilling multi-view consistency.
Native 3D Editing with Full Attention
CV and Pattern Recognition
Changes 3D shapes with simple text commands.