Make Your MoVe: Make Your 3D Contents by Adapting Multi-View Diffusion Models to External Editing

Published: August 11, 2025 | arXiv ID: 2508.07700v1

By: Weitao Wang, Haoran Xu, Jun Meng, and more

Potential Business Impact:

Edits the color, style, and lighting of 3D content without distorting its underlying geometry.

As 3D generation techniques continue to flourish, the demand for personalized content is rapidly rising. Users increasingly seek to apply various editing methods to polish generated 3D content, aiming to enhance its color, style, and lighting without compromising the underlying geometry. However, most existing editing tools operate in the 2D domain, and directly feeding their results into 3D generation methods (such as multi-view diffusion models) introduces information loss that degrades the quality of the final 3D assets. In this paper, we propose a tuning-free, plug-and-play scheme that aligns edited assets with their original geometry in a single inference run. Central to our approach is a geometry preservation module that guides the edited multi-view generation with the original input normal latents. In addition, an injection switcher deliberately controls the extent of supervision from the original normals, ensuring alignment between the edited color and normal views. Extensive experiments show that our method consistently improves both the multi-view consistency and mesh quality of edited 3D assets across multiple combinations of multi-view diffusion models and editing methods.
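The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of how a geometry preservation step gated by an injection switcher might look inside a diffusion sampling loop. It is not the authors' implementation; all names (`inject_normal_latents`, `switch_step`, `strength`) are assumptions introduced for illustration.

```python
import numpy as np


def inject_normal_latents(edited_normal_latent: np.ndarray,
                          original_normal_latent: np.ndarray,
                          timestep: int,
                          switch_step: int,
                          strength: float = 1.0) -> np.ndarray:
    """Blend the edited normal latent with the original normal latent.

    Hypothetical sketch: `switch_step` stands in for the paper's injection
    switcher. During the early, noisy timesteps (high t), the original
    normals supervise the generation to anchor geometry; after the switch,
    injection stops so the edited color and normal views can align freely.
    """
    if timestep > switch_step:
        # Early steps: pull the normal latent toward the original geometry.
        return (1.0 - strength) * edited_normal_latent + strength * original_normal_latent
    # Late steps: leave the edited branch untouched.
    return edited_normal_latent


# Example usage with dummy latents (shapes are illustrative only).
edited = np.random.randn(4, 32, 32)
original = np.random.randn(4, 32, 32)
blended = inject_normal_latents(edited, original, timestep=800, switch_step=500)
```

In this sketch the switcher is a simple timestep threshold; the paper's actual switcher may modulate supervision differently, but the key idea it conveys is that geometry guidance from the original normals is applied only part of the time rather than throughout sampling.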

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition