Multi-modal brain MRI synthesis based on SwinUNETR
By: Haowen Pang, Weiyan Guo, Chuyang Ye
Potential Business Impact:
Creates missing MRI scans from existing ones.
Multi-modal brain magnetic resonance imaging (MRI) plays a crucial role in clinical diagnostics by providing complementary information across different imaging modalities. However, missing MRI modalities are a common challenge in clinical practice. In this paper, we apply SwinUNETR to the synthesis of missing modalities in brain MRI. SwinUNETR is a neural network architecture designed for medical image analysis that integrates the strengths of the Swin Transformer and convolutional neural networks (CNNs). The Swin Transformer, a variant of the Vision Transformer (ViT), incorporates hierarchical feature extraction and window-based self-attention, enabling it to capture both local and global contextual information effectively. By combining the Swin Transformer with CNNs, SwinUNETR merges global context awareness with detailed spatial resolution. This hybrid approach addresses the challenges posed by varying modality characteristics and complex brain structures, facilitating the generation of accurate and realistic synthetic images. We evaluate the performance of SwinUNETR on brain MRI datasets and demonstrate its capability to generate clinically valuable images. Our results show significant improvements in image quality, anatomical consistency, and diagnostic value.
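The window-based self-attention mentioned above rests on a simple operation: the feature map is split into non-overlapping windows, and attention is computed only within each window. A minimal NumPy sketch of that partitioning step is shown below; the feature-map size, channel count, and window size are illustrative assumptions, not values from the paper, and the `window_partition` helper is hypothetical.

```python
import numpy as np

def window_partition(x: np.ndarray, window_size: int) -> np.ndarray:
    """Split an (H, W, C) feature map into (num_windows, window_size**2, C)
    non-overlapping windows, the token groups over which window-based
    self-attention would be computed."""
    H, W, C = x.shape
    assert H % window_size == 0 and W % window_size == 0
    # Reshape into a grid of windows, then flatten each window's pixels.
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    windows = x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size ** 2, C)
    return windows

# Toy 8x8 feature map with 4 channels, split into four 4x4 windows.
feat = np.random.rand(8, 8, 4)
wins = window_partition(feat, window_size=4)
print(wins.shape)  # (4, 16, 4)
```

Restricting attention to each window keeps the cost linear in image size rather than quadratic, and the Swin Transformer alternates shifted window positions across layers so that information can still flow between neighboring windows.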
Similar Papers
Improving Prostate Gland Segmenting Using Transformer based Architectures
Image and Video Processing
Helps doctors find prostate cancer on scans.
Brain Hematoma Marker Recognition Using Multitask Learning: SwinTransformer and Swin-Unet
Machine Learning (CS)
Makes computer vision models more accurate.
Voxel-Level Brain States Prediction Using Swin Transformer
Neurons and Cognition
Predicts future brain activity from brain scans.