Point-Based Shape Representation Generation with a Correspondence-Preserving Diffusion Model
By: Shen Zhu, Yinzhu Jin, Ifrah Zawar, and others
Potential Business Impact:
Creates 3D brain models with matching points.
We propose a diffusion model designed to generate point-based shape representations with correspondences. Traditional statistical shape models rely heavily on point correspondences, but current deep generative methods for point clouds ignore them: they operate on unordered points and provide no correspondence between generated shapes. This work formulates a diffusion model capable of generating realistic point-based shape representations that preserve the point correspondences present in the training data. Using shape representation data with correspondences derived from the Open Access Series of Imaging Studies 3 (OASIS-3), we demonstrate that our correspondence-preserving model generates point-based hippocampal shape representations that are substantially more realistic than those of existing methods. We further demonstrate downstream applications of our generative model, such as conditional generation of healthy and Alzheimer's disease (AD) subjects and prediction of morphological changes of disease progression via counterfactual generation.
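The key idea, that a diffusion model can act on an *ordered* point set so that row i of every generated shape refers to the same anatomical location, can be illustrated with a standard DDPM-style forward noising step. This is a minimal sketch under assumed conventions (linear beta schedule, epsilon-prediction parameterization), not the paper's actual model; the function name `forward_diffuse` and all parameters are illustrative.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Noise an ordered point set x0 at timestep t (DDPM-style sketch).

    x0: (N, 3) array in which row i is always the i-th corresponding
    landmark. Noising is applied per coordinate and never permutes
    rows, so point identity (the correspondence) is preserved; the
    reverse (denoising) network would likewise predict per-row noise.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative signal fraction
    noise = rng.standard_normal(x0.shape)       # per-point Gaussian noise
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((256, 3))             # toy shape: 256 corresponding points
betas = np.linspace(1e-4, 0.02, 1000)          # assumed linear noise schedule
xt, eps = forward_diffuse(x0, 500, betas, rng)
print(xt.shape)                                # shape, and thus row ordering, is unchanged
```

Because the row ordering is fixed across all training shapes, averaging or interpolating generated samples row-by-row is meaningful, which is what enables downstream tasks like counterfactual morphological predictions.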
Similar Papers
Repurposing 2D Diffusion Models for 3D Shape Completion
CV and Pattern Recognition
Fills in missing parts of 3D shapes.
KeyPointDiffuser: Unsupervised 3D Keypoint Learning via Latent Diffusion Models
CV and Pattern Recognition
Teaches computers to see and build 3D shapes.
PointDico: Contrastive 3D Representation Learning Guided by Diffusion Models
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.