Repurposing 2D Diffusion Models for 3D Shape Completion
By: Yao He, Youngjoong Kwon, Tiange Xiang, and more
Potential Business Impact:
Fills in missing parts of 3D shapes.
We present a framework that adapts 2D diffusion models for 3D shape completion from incomplete point clouds. While text-to-image diffusion models have achieved remarkable success with abundant 2D data, 3D diffusion models lag behind due to the scarcity of high-quality 3D datasets and a persistent modality gap between 3D inputs and 2D latent spaces. To overcome these limitations, we introduce the Shape Atlas, a compact 2D representation of 3D geometry that (1) enables full utilization of the generative power of pretrained 2D diffusion models, and (2) aligns the modalities of the conditional input and output spaces, allowing more effective conditioning. This unified 2D formulation facilitates learning from limited 3D data and produces high-quality, detail-preserving shape completions. We validate the effectiveness of our method on the PCN and ShapeNet-55 datasets. Additionally, we demonstrate a downstream application, generating artist-created meshes from our completed point clouds, further showing the practicality of our method.
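To make the core idea concrete, here is a minimal, hypothetical sketch of encoding a 3D point cloud as a 2D image-like grid, in the spirit of a "Shape Atlas" (this is not the paper's actual construction; the binning scheme, resolution, and function names below are illustrative assumptions):

```python
import numpy as np

def points_to_atlas(points, res=32):
    """Illustrative only (not the paper's Shape Atlas): flatten an (N, 3)
    point cloud into a res x res x 3 grid by binning points on their first
    two axes and storing the mean xyz coordinate per occupied cell."""
    atlas = np.zeros((res, res, 3), dtype=np.float32)
    counts = np.zeros((res, res), dtype=np.int64)
    # Normalize coordinates to [0, 1) so every point maps to a valid cell.
    mins, maxs = points.min(0), points.max(0)
    norm = (points - mins) / np.maximum(maxs - mins, 1e-8)
    u = np.clip((norm[:, 0] * res).astype(int), 0, res - 1)
    v = np.clip((norm[:, 1] * res).astype(int), 0, res - 1)
    for i in range(len(points)):
        atlas[u[i], v[i]] += points[i]
        counts[u[i], v[i]] += 1
    occupied = counts > 0
    atlas[occupied] /= counts[occupied][:, None]  # average points per cell
    return atlas, occupied

def atlas_to_points(atlas, occupied):
    """Recover the stored 3D samples from the occupied atlas cells."""
    return atlas[occupied]
```

Once geometry lives in such a 2D grid, a pretrained 2D diffusion model can, in principle, denoise or inpaint it like an ordinary image, which is what lets the framework reuse 2D generative priors for 3D completion.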
Similar Papers
Repurposing 2D Diffusion Models with Gaussian Atlas for 3D Generation
CV and Pattern Recognition
Makes computers create 3D objects from text.
Point-Based Shape Representation Generation with a Correspondence-Preserving Diffusion Model
CV and Pattern Recognition
Creates 3D brain models with matching points.
KeyPointDiffuser: Unsupervised 3D Keypoint Learning via Latent Diffusion Models
CV and Pattern Recognition
Teaches computers to see and build 3D shapes.