3DTeethSAM: Taming SAM2 for 3D Teeth Segmentation
By: Zhiguo Lu, Jianwen Lou, Mingjun Ma, and more
Potential Business Impact:
Helps dentists precisely map out teeth in 3D.
3D teeth segmentation, which involves localizing tooth instances and assigning them semantic categories on 3D dental models, is a critical yet challenging task in digital dentistry due to the complexity of real-world dentition. In this paper, we propose 3DTeethSAM, an adaptation of the Segment Anything Model 2 (SAM2) for 3D teeth segmentation. SAM2 is a pretrained foundation model for image and video segmentation that provides a strong backbone across a variety of downstream scenarios. To adapt SAM2 to 3D teeth data, we render images of 3D teeth models from predefined views, apply SAM2 for 2D segmentation, and reconstruct the 3D results using 2D-3D projections. Because SAM2's performance depends on its input prompts, its initial outputs often contain deficiencies, and it is class-agnostic by design, we introduce three lightweight learnable modules: (1) a prompt embedding generator that derives prompt embeddings from image embeddings for accurate mask decoding, (2) a mask refiner that enhances SAM2's initial segmentation results, and (3) a mask classifier that categorizes the generated masks. Additionally, we incorporate Deformable Global Attention Plugins (DGAP) into SAM2's image encoder; DGAP improves both segmentation accuracy and training speed. Our method has been validated on the 3DTeethSeg benchmark, achieving an IoU of 91.90% on high-resolution 3D teeth meshes and establishing a new state of the art in the field.
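To make the render-then-back-project idea concrete, below is a minimal sketch (not the authors' released code) of how per-view 2D tooth-label maps could be fused onto mesh faces by majority voting over face-to-pixel correspondences. The helper signature, the number of views, and the label count (16 teeth per jaw plus background) are illustrative assumptions; the rasterizer producing per-pixel face indices and the SAM2-plus-classifier inference producing per-pixel labels are assumed to run upstream.

```python
import numpy as np


def aggregate_views(face_id_maps, label_maps, num_faces, num_classes):
    """Fuse per-view 2D tooth-label maps into per-face 3D labels by majority vote.

    face_id_maps: list of (H, W) int arrays; each pixel holds the index of the
        mesh face visible at that pixel, or -1 for background (assumed output
        of a rasterizer over the predefined camera views).
    label_maps:   list of (H, W) int arrays; per-pixel tooth labels, 0 = background
        (assumed output of SAM2 plus a mask-classifier head).
    """
    votes = np.zeros((num_faces, num_classes), dtype=np.int64)
    for face_ids, labels in zip(face_id_maps, label_maps):
        visible = face_ids >= 0
        # Accumulate one vote per visible pixel for (its face, its predicted label).
        np.add.at(votes, (face_ids[visible], labels[visible]), 1)
    face_labels = votes.argmax(axis=1)
    face_labels[votes.sum(axis=1) == 0] = 0  # faces never seen from any view -> background
    return face_labels


if __name__ == "__main__":
    # Toy usage with synthetic arrays standing in for rendered views.
    rng = np.random.default_rng(0)
    num_faces, num_classes, H, W = 200, 17, 64, 64  # 16 teeth + background (assumed)
    face_id_maps = [rng.integers(-1, num_faces, size=(H, W)) for _ in range(6)]
    label_maps = [rng.integers(0, num_classes, size=(H, W)) for _ in range(6)]
    labels = aggregate_views(face_id_maps, label_maps, num_faces, num_classes)
    print(labels.shape, labels.min(), labels.max())
```

Majority voting is only one plausible fusion rule; the paper's actual 2D-3D projection and any confidence weighting from the mask refiner may differ.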
Similar Papers
MedSAM2: Segment Anything in 3D Medical Images and Videos
Image and Video Processing
Helps doctors see inside bodies better.
SAM2-3dMed: Empowering SAM2 for 3D Medical Image Segmentation
Image and Video Processing
Helps doctors see inside bodies better.
GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
CV and Pattern Recognition
Helps computers understand object parts from different views.