DEAP-3DSAM: Decoder Enhanced and Auto Prompt SAM for 3D Medical Image Segmentation
By: Fangda Chen, Jintao Tang, Pancheng Wang, and more
Potential Business Impact:
Helps doctors find tumors in 3D scans more accurately, without needing manual prompts from experts.
The Segment Anything Model (SAM) has recently demonstrated significant potential in medical image segmentation. Although SAM is primarily trained on 2D images, attempts have been made to apply it to 3D medical image segmentation. However, the pseudo-3D processing used to adapt SAM leads to a loss of spatial features, limiting its performance. Additionally, most SAM-based methods still rely on manual prompts, which are challenging to provide in real-world scenarios and require extensive external expert knowledge. To address these limitations, we introduce the Decoder Enhanced and Auto Prompt SAM (DEAP-3DSAM). Specifically, we propose a Feature Enhanced Decoder that fuses the original image features, with their rich and detailed spatial information, into the decoder to strengthen its spatial features. We also design a Dual Attention Prompter that automatically obtains prompt information through Spatial Attention and Channel Attention. We conduct comprehensive experiments on four public abdominal tumor segmentation datasets. The results indicate that DEAP-3DSAM achieves state-of-the-art performance in 3D image segmentation, outperforming or matching existing manual prompt methods. Furthermore, both quantitative and qualitative ablation studies confirm the effectiveness of the proposed modules.
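The paper's code is not shown on this page, so the snippet below is only a minimal sketch of how a "Dual Attention Prompter" could turn 3D encoder features into prompt embeddings without manual clicks, assuming a CBAM-style combination of channel and spatial attention. All module names, tensor sizes, the reduction ratio, and the final projection are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an automatic prompter using channel + spatial attention
# over 3D features. Layer choices and dimensions are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel reweighting for 3D feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w  # reweight channels


class SpatialAttention(nn.Module):
    """Single-channel spatial attention map over the 3D volume."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)   # per-voxel mean over channels
        mx, _ = x.max(dim=1, keepdim=True)  # per-voxel max over channels
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w  # reweight voxels


class DualAttentionPrompter(nn.Module):
    """Derives dense prompt embeddings from image features automatically,
    standing in for manual point/box prompts (hypothetical layout)."""
    def __init__(self, channels: int, prompt_dim: int = 256):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()
        self.proj = nn.Conv3d(channels, prompt_dim, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        attended = self.spatial_attn(self.channel_attn(feats))
        return self.proj(attended)  # dense prompt embedding for the decoder


if __name__ == "__main__":
    feats = torch.randn(1, 384, 8, 16, 16)  # toy encoder features (B, C, D, H, W)
    prompts = DualAttentionPrompter(channels=384)(feats)
    print(prompts.shape)  # torch.Size([1, 256, 8, 16, 16])
```

In this reading, the attended features double as the prompt signal fed to the mask decoder, which is one plausible way to remove the dependence on expert-provided points or boxes described in the abstract.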
Similar Papers
MedSAM3: Delving into Segment Anything with Medical Concepts
CV and Pattern Recognition
Lets doctors find body parts in scans with words.
RadSAM: Segmenting 3D radiological images with a 2D promptable model
CV and Pattern Recognition
Helps doctors see inside bodies faster.
SAM2-3dMed: Empowering SAM2 for 3D Medical Image Segmentation
Image and Video Processing
Helps doctors see inside bodies better.