GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
By: Ken Deng, Yunhan Yang, Jingxiang Sun, and more
Potential Business Impact:
Helps computers understand object parts from different views.
We introduce GeoSAM2, a prompt-controllable framework for 3D part segmentation that casts the task as multi-view 2D mask prediction. Given a textureless object, we render normal and point maps from predefined viewpoints and accept simple 2D prompts (clicks or boxes) to guide part selection. These prompts are processed by a shared SAM2 backbone augmented with LoRA and residual geometry fusion, enabling view-specific reasoning while preserving pretrained priors. The predicted masks are back-projected to the object and aggregated across views. Our method enables fine-grained, part-specific control without requiring text prompts, per-shape optimization, or full 3D labels. In contrast to global clustering or scale-based methods, prompts are explicit, spatially grounded, and interpretable. We achieve state-of-the-art class-agnostic performance on PartObjaverse-Tiny and PartNetE, outperforming both slow optimization-based pipelines and fast but coarse feedforward approaches. Our results highlight a new paradigm: aligning 3D segmentation with SAM2 and leveraging interactive 2D inputs to unlock controllability and precision in object-level part understanding.
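The abstract describes a pipeline of per-view mask prediction followed by back-projection and cross-view aggregation. Below is a minimal, self-contained sketch (not the authors' code) of that aggregation step: per-view 2D masks are mapped onto the object's points via the rendered point maps and fused by majority vote. The function `predict_mask_for_view` is a hypothetical placeholder standing in for the LoRA-adapted SAM2 backbone, and all names and thresholds are assumptions made so the sketch runs end to end.

```python
# Sketch of multi-view mask aggregation, assuming rendered point maps that
# store the 3D coordinate visible at each pixel. Not the GeoSAM2 implementation.
import numpy as np

def predict_mask_for_view(normal_map, point_map, prompt_xy):
    """Hypothetical stand-in for the SAM2-based per-view mask predictor.
    It selects pixels whose 3D point lies near the prompted pixel's 3D point,
    so the sketch runs without the real model."""
    seed = point_map[prompt_xy[1], prompt_xy[0]]        # 3D point under the click
    dists = np.linalg.norm(point_map - seed, axis=-1)   # distance in object space
    return dists < 0.25                                  # boolean (h, w) mask

def aggregate_views(point_cloud, views, prompt_xy, vote_thresh=0.5):
    """Back-project each view's 2D mask onto the shared point cloud and fuse by voting.

    point_cloud: (N, 3) object points.
    views: list of dicts with 'normal_map' (h, w, 3) and 'point_map' (h, w, 3).
    """
    votes = np.zeros(len(point_cloud))
    counts = np.zeros(len(point_cloud))
    for view in views:
        mask = predict_mask_for_view(view["normal_map"], view["point_map"], prompt_xy)
        pix_pts = view["point_map"].reshape(-1, 3)
        pix_lbl = mask.reshape(-1).astype(float)
        # Nearest-neighbour back-projection: assign each pixel's label to the
        # closest cloud point (brute force; a stand-in for visibility-aware projection).
        d = np.linalg.norm(point_cloud[None, :, :] - pix_pts[:, None, :], axis=-1)
        nearest = d.argmin(axis=1)
        np.add.at(votes, nearest, pix_lbl)
        np.add.at(counts, nearest, 1.0)
    seen = counts > 0
    part_mask = np.zeros(len(point_cloud), dtype=bool)
    part_mask[seen] = (votes[seen] / counts[seen]) >= vote_thresh
    return part_mask

if __name__ == "__main__":
    # Toy example: a unit-cube point cloud and two synthetic "renders".
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0, 1, size=(500, 3))
    h = w = 32

    def fake_view():
        idx = rng.integers(0, len(cloud), size=(h, w))
        return {"normal_map": np.zeros((h, w, 3)), "point_map": cloud[idx]}

    selected = aggregate_views(cloud, [fake_view(), fake_view()], prompt_xy=(16, 16))
    print(f"selected {selected.sum()} of {len(cloud)} points")
```

In the real method the per-view predictions would come from the SAM2 backbone with geometry fusion, and back-projection would respect visibility; the vote-based fusion shown here is only one plausible reading of "aggregated across views."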
Similar Papers
Evaluating SAM2 for Video Semantic Segmentation
CV and Pattern Recognition
Lets computers perfectly cut out any object in videos.
P3-SAM: Native 3D Part Segmentation
CV and Pattern Recognition
Breaks down 3D objects into parts automatically.