GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation

Published: August 19, 2025 | arXiv ID: 2508.14036v2

By: Ken Deng, Yunhan Yang, Jingxiang Sun, and more

Potential Business Impact:

Helps computers understand object parts from different views.

Business Areas:
Image Recognition, Data and Analytics, Software

We introduce GeoSAM2, a prompt-controllable framework for 3D part segmentation that casts the task as multi-view 2D mask prediction. Given a textureless object, we render normal and point maps from predefined viewpoints and accept simple 2D prompts (clicks or boxes) to guide part selection. These prompts are processed by a shared SAM2 backbone augmented with LoRA and residual geometry fusion, enabling view-specific reasoning while preserving pretrained priors. The predicted masks are back-projected onto the object and aggregated across views. Our method enables fine-grained, part-specific control without requiring text prompts, per-shape optimization, or full 3D labels. In contrast to global clustering or scale-based methods, our prompts are explicit, spatially grounded, and interpretable. We achieve state-of-the-art class-agnostic performance on PartObjaverse-Tiny and PartNetE, outperforming both slow optimization-based pipelines and fast but coarse feedforward approaches. Our results highlight a new paradigm: aligning 3D segmentation with SAM2 and leveraging interactive 2D inputs to unlock controllability and precision in object-level part understanding.
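The back-projection and cross-view aggregation step described in the abstract can be sketched as a simple voting scheme. The code below is a minimal illustration, not the authors' implementation: it assumes the renderer exposes a per-pixel index map (each pixel stores the index of the 3D point it was rendered from, or -1 for background), and fuses per-view 2D masks into a per-point 3D part label by majority vote over the views in which each point is visible. The function name and signature are hypothetical.

```python
import numpy as np

def aggregate_part_masks(index_maps, masks, num_points):
    """Fuse per-view 2D part masks into a per-point 3D part selection.

    index_maps: list of (H, W) int arrays; each pixel holds the index of
        the object point it was rendered from, or -1 for background
        (assumption: the renderer provides such a visibility/index buffer).
    masks: list of (H, W) bool arrays predicted by the 2D segmenter.
    num_points: total number of 3D points on the object.
    Returns a (num_points,) bool array marking points assigned to the part.
    """
    in_part = np.zeros(num_points, dtype=np.int64)  # votes for "in part"
    visible = np.zeros(num_points, dtype=np.int64)  # times point was seen
    for idx_map, mask in zip(index_maps, masks):
        fg = idx_map >= 0  # pixels that hit the object
        # accumulate votes; np.add.at handles repeated indices correctly
        np.add.at(visible, idx_map[fg], 1)
        np.add.at(in_part, idx_map[fg & mask], 1)
    # majority vote among the views where each point was visible
    seen = visible > 0
    out = np.zeros(num_points, dtype=bool)
    out[seen] = in_part[seen] * 2 > visible[seen]
    return out
```

Majority voting makes the aggregation robust to an occasional bad 2D mask in one view, since a point must be selected in more than half of the views that actually see it.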

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition