BrainSegNet: A Novel Framework for Whole-Brain MRI Parcellation Enhanced by Large Models
By: Yucheng Li, Xiaofan Wang, Junyi Wang, and more
Whole-brain parcellation from MRI is a critical yet challenging task due to the complexity of subdividing the brain into numerous small, irregularly shaped regions. Traditionally, template-registration methods were used, but recent advances have shifted toward deep learning for faster workflows. While large models such as the Segment Anything Model (SAM) offer transferable feature representations, they are not tailored to the high precision required for brain parcellation. To address this, we propose BrainSegNet, a novel framework that adapts SAM for accurate whole-brain parcellation into 95 regions. We enhance SAM by integrating U-Net skip connections and specialized modules into its encoder and decoder, enabling fine-grained anatomical precision. Key components include a hybrid encoder combining U-Net skip connections with SAM's transformer blocks, a multi-scale attention decoder with pyramid pooling to handle structures of varying size, and a boundary refinement module to sharpen region edges. Experimental results on the Human Connectome Project (HCP) dataset demonstrate that BrainSegNet outperforms several state-of-the-art methods, achieving higher accuracy and robustness on complex, multi-label parcellation.
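No implementation details are given in this abstract, so the following is only an illustrative sketch (all function and variable names are hypothetical, not from the paper) of the pyramid-pooling idea behind a multi-scale decoder: a feature map is average-pooled into coarse grids of several sizes, each grid is upsampled back to full resolution, and the results are stacked as extra channels so that later layers see both fine and coarse context.

```python
import numpy as np

def pyramid_pool(feat, bin_sizes=(1, 2, 4)):
    """Sketch of spatial pyramid pooling on a single 2D feature map.

    For each bin size b, average-pool `feat` into a b-by-b grid, then
    upsample the grid back to the input resolution by nearest-neighbor
    repetition. The original map and all upsampled maps are stacked
    along a new channel axis.
    """
    h, w = feat.shape
    pooled_maps = [feat]
    for b in bin_sizes:
        # Edges of the b-by-b pooling grid along each axis.
        ys = np.linspace(0, h, b + 1).astype(int)
        xs = np.linspace(0, w, b + 1).astype(int)
        coarse = np.empty((b, b))
        for i in range(b):
            for j in range(b):
                coarse[i, j] = feat[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
        # Nearest-neighbor upsample back to (h, w).
        rows = np.minimum(np.arange(h) * b // h, b - 1)
        cols = np.minimum(np.arange(w) * b // w, b - 1)
        pooled_maps.append(coarse[rows][:, cols])
    return np.stack(pooled_maps)  # shape: (1 + len(bin_sizes), h, w)

feat = np.arange(64, dtype=float).reshape(8, 8)
out = pyramid_pool(feat)
print(out.shape)  # (4, 8, 8): original map plus one channel per bin size
```

In a real network the pooling would act on multi-channel tensors and be followed by learned convolutions (e.g. PyTorch's `AdaptiveAvgPool2d` plus 1x1 convs); this sketch only shows the multi-scale aggregation itself.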