3D-LATTE: Latent Space 3D Editing from Textual Instructions
By: Maria Parelli, Michael Oechsle, Michael Niemeyer, and more
Potential Business Impact:
Changes 3D objects with words, not just pictures.
Despite the recent success of multi-view diffusion models for text/image-based 3D asset generation, instruction-based editing of 3D assets lags surprisingly far behind the quality of generation models. The main reason is that recent approaches using 2D priors suffer from view-inconsistent editing signals. Going beyond 2D prior distillation methods and multi-view editing strategies, we propose a training-free editing method that operates within the latent space of a native 3D diffusion model, allowing us to directly manipulate 3D geometry. We guide the edit synthesis by blending 3D attention maps from the generation with the source object. Coupled with geometry-aware regularization guidance, a spectral modulation strategy in the Fourier domain, and a refinement step for 3D enhancement, our method outperforms previous 3D editing methods, enabling high-fidelity, precise, and robust edits across a wide range of shapes and semantic manipulations.
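To make the two core mechanisms in the abstract more concrete, here is a minimal sketch of what attention-map blending and Fourier-domain spectral modulation could look like on grid-shaped 3D latents. This is not the paper's released code: the function names (`blend_attention`, `spectral_modulate`), the `alpha` and `cutoff` parameters, and the assumed latent shape `(C, D, H, W)` are all illustrative assumptions.

```python
# Hypothetical sketch of two ideas from the abstract:
#   1) blending attention maps from the source object into the edit pass,
#   2) spectral modulation of 3D latents in the Fourier domain.
# All names and shapes are assumptions; the paper's actual 3D diffusion
# backbone and pipeline are not reproduced here.
import torch


def blend_attention(attn_edit: torch.Tensor,
                    attn_source: torch.Tensor,
                    alpha: float = 0.7) -> torch.Tensor:
    """Inject source-object attention into the edit synthesis.

    attn_edit, attn_source: attention maps of identical shape, e.g.
    (heads, tokens, tokens), taken at the same denoising step.
    alpha controls how strongly the source structure is preserved.
    """
    return alpha * attn_source + (1.0 - alpha) * attn_edit


def spectral_modulate(latent_edit: torch.Tensor,
                      latent_source: torch.Tensor,
                      cutoff: float = 0.25) -> torch.Tensor:
    """Keep low-frequency structure from the source latent and
    high-frequency detail from the edited latent.

    Assumes a dense grid latent of shape (C, D, H, W).
    """
    fft_edit = torch.fft.fftn(latent_edit, dim=(-3, -2, -1))
    fft_src = torch.fft.fftn(latent_source, dim=(-3, -2, -1))

    # Build a low-pass mask over the 3D frequency grid.
    D, H, W = latent_edit.shape[-3:]
    fz = torch.fft.fftfreq(D).abs()
    fy = torch.fft.fftfreq(H).abs()
    fx = torch.fft.fftfreq(W).abs()
    radius = (fz[:, None, None] ** 2 + fy[None, :, None] ** 2
              + fx[None, None, :] ** 2).sqrt()
    low_pass = (radius <= cutoff).to(latent_edit.dtype)

    # Blend spectra, then return to the spatial domain.
    blended = low_pass * fft_src + (1.0 - low_pass) * fft_edit
    return torch.fft.ifftn(blended, dim=(-3, -2, -1)).real
```

In this sketch, lowering `cutoff` hands more of the frequency spectrum to the edited latent (stronger edits), while raising it preserves more of the source geometry; the real method's guidance and refinement steps would sit around such operations inside the denoising loop.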
Similar Papers
LatentEdit: Adaptive Latent Control for Consistent Semantic Editing
Graphics
Changes pictures while keeping the background the same.
UniLat3D: Geometry-Appearance Unified Latents for Single-Stage 3D Generation
CV and Pattern Recognition
Makes 3D objects from one picture fast.