Make Your MoVe: Make Your 3D Contents by Adapting Multi-View Diffusion Models to External Editing
By: Weitao Wang, Haoran Xu, Jun Meng, and more
Potential Business Impact:
Edits 3D objects without messing up their shape.
As 3D generation techniques continue to flourish, the demand for personalized content is rapidly rising. Users increasingly seek to apply various editing methods to polish generated 3D content, aiming to enhance its color, style, and lighting without compromising the underlying geometry. However, most existing editing tools operate in the 2D domain, and directly feeding their results into 3D generation methods (such as multi-view diffusion models) introduces information loss, degrading the quality of the final 3D assets. In this paper, we propose a tuning-free, plug-and-play scheme that aligns edited assets with their original geometry in a single inference run. Central to our approach is a geometry preservation module that guides edited multi-view generation with the normal latents of the original input. In addition, an injection switcher deliberately controls how strongly the original normals supervise generation, ensuring alignment between the edited color and normal views. Extensive experiments show that our method consistently improves both the multi-view consistency and mesh quality of edited 3D assets across multiple combinations of multi-view diffusion models and editing methods.
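To make the idea concrete, here is a minimal sketch of how normal-latent injection with a switcher could sit inside a multi-view denoising loop. This is not the authors' implementation: the model interface `denoise_step`, the step count `NUM_STEPS`, and the threshold `SWITCH_STEP` are all assumptions introduced for illustration.

```python
# Hypothetical sketch of geometry-preserving injection during multi-view denoising.
# All names here (denoise_step, NUM_STEPS, SWITCH_STEP) are assumptions, not the paper's API.
import torch

NUM_STEPS = 50      # total denoising steps (assumed)
SWITCH_STEP = 25    # injection switcher threshold (assumed): inject only while t >= SWITCH_STEP


def sample_with_geometry_preservation(color_latents: torch.Tensor,
                                      normal_latents: torch.Tensor,
                                      original_normal_latents: torch.Tensor,
                                      denoise_step):
    """Denoise edited color/normal multi-view latents in a single inference run.

    `denoise_step(color, normal, t)` stands in for one joint color/normal
    denoising step of a multi-view diffusion model.
    """
    for t in reversed(range(NUM_STEPS)):
        color_latents, normal_latents = denoise_step(color_latents, normal_latents, t)

        # Geometry preservation: early in sampling, override the normal branch
        # with the original input's normal latents so the edited views keep the
        # input geometry. The injection switcher stops this supervision in later
        # steps so the edited color views and normal views remain mutually aligned.
        if t >= SWITCH_STEP:
            normal_latents = original_normal_latents

    return color_latents, normal_latents
```

Because the guidance only touches the sampling loop, a scheme like this can in principle be dropped into different multi-view diffusion backbones without any fine-tuning, which matches the tuning-free, plug-and-play claim above.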
Similar Papers
3D-Consistent Multi-View Editing by Diffusion Guidance
CV and Pattern Recognition
Makes 3D pictures look right after editing.
FROMAT: Multiview Material Appearance Transfer via Few-Shot Self-Attention Adaptation
CV and Pattern Recognition
Changes how things look in 3D pictures.
Coupled Diffusion Sampling for Training-Free Multi-View Image Editing
CV and Pattern Recognition
Edits pictures from many angles, all matching.