DetailGen3D: Generative 3D Geometry Enhancement via Data-Dependent Flow
By: Ken Deng, Yunhan Yang, Jingxiang Sun, and more
Potential Business Impact:
Quickly adds fine geometric detail to coarse, computer-generated 3D shapes.
Modern 3D generation methods can rapidly create shapes from sparse or single views, but their outputs often lack geometric detail due to computational constraints. We present DetailGen3D, a generative approach specifically designed to enhance these generated 3D shapes. Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space, avoiding the computational overhead of large-scale 3D generative models. We introduce a token matching strategy that ensures accurate spatial correspondence during refinement, enabling local detail synthesis while preserving global structure. By carefully designing our training data to match the characteristics of synthesized coarse shapes, our method can effectively enhance shapes produced by various 3D generation and reconstruction approaches, from single-view to sparse multi-view inputs. Extensive experiments demonstrate that DetailGen3D achieves high-fidelity geometric detail synthesis while maintaining efficiency in training.
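The abstract's core recipe, a data-dependent flow in latent space that moves coarse shape tokens toward detailed ones, with token matching to keep spatial correspondence, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the module names, token and position shapes, the nearest-neighbor matching rule, and the straight-line flow-matching objective are all assumptions made for illustration.

# Minimal illustrative sketch (PyTorch); names, shapes, and the matching
# rule are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class LatentFlowRefiner(nn.Module):
    """Predicts a velocity field that moves coarse latent tokens toward
    detailed ones, conditioned on the coarse tokens themselves."""

    def __init__(self, dim=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.time_embed = nn.Linear(1, dim)
        self.head = nn.Linear(dim, dim)

    def forward(self, x_t, coarse_tokens, t):
        # x_t, coarse_tokens: (B, N, D); t: (B,) flow time in [0, 1]
        t_emb = self.time_embed(t[:, None])            # (B, D)
        h = x_t + coarse_tokens + t_emb[:, None, :]    # broadcast over tokens
        return self.head(self.backbone(h))             # predicted velocity


def match_tokens(coarse_pos, fine_pos, fine_tokens):
    # Token matching (assumed nearest-neighbor rule): pair each coarse token
    # with the spatially closest detailed token so refinement targets stay in
    # correspondence with the coarse structure.
    # coarse_pos: (B, N, 3), fine_pos: (B, M, 3), fine_tokens: (B, M, D)
    idx = torch.cdist(coarse_pos, fine_pos).argmin(dim=-1)        # (B, N)
    idx = idx[..., None].expand(-1, -1, fine_tokens.shape[-1])
    return torch.gather(fine_tokens, 1, idx)                      # (B, N, D)


def flow_matching_loss(model, coarse_tokens, matched_fine_tokens):
    # Straight-line (rectified-flow-style) objective between matched pairs.
    B = coarse_tokens.shape[0]
    t = torch.rand(B, device=coarse_tokens.device)
    x_t = (1 - t)[:, None, None] * coarse_tokens + t[:, None, None] * matched_fine_tokens
    target_velocity = matched_fine_tokens - coarse_tokens
    pred = model(x_t, coarse_tokens, t)
    return ((pred - target_velocity) ** 2).mean()


if __name__ == "__main__":
    B, N, M, D = 2, 128, 256, 64
    model = LatentFlowRefiner(dim=D)
    coarse_pos, fine_pos = torch.rand(B, N, 3), torch.rand(B, M, 3)
    coarse_tok, fine_tok = torch.randn(B, N, D), torch.randn(B, M, D)
    matched = match_tokens(coarse_pos, fine_pos, fine_tok)
    loss = flow_matching_loss(model, coarse_tok, matched)
    loss.backward()
    print(float(loss))

Under the same assumptions, inference would integrate the predicted velocity from the coarse latents (t = 0) to t = 1 over a few Euler steps and then decode the refined latents back into geometry; that decoding stage is outside the scope of this sketch.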
Similar Papers
GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
CV and Pattern Recognition
Helps computers understand object parts from different views.
SAM 3D: 3Dfy Anything in Images
CV and Pattern Recognition
Turns flat pictures into 3D objects.
S2AM3D: Scale-controllable Part Segmentation of 3D Point Cloud
CV and Pattern Recognition
Helps computers understand 3D shapes part by part.