Mamba-Based Modality Disentanglement Network for Multi-Contrast MRI Reconstruction
By: Weiyi Lyu, Xinming Fang, Jun Wang, and more
Magnetic resonance imaging (MRI) is a cornerstone of modern clinical diagnosis, offering unparalleled soft-tissue contrast without ionizing radiation. However, prolonged scan times remain a major barrier to patient throughput and comfort. Existing accelerated MRI techniques often struggle with two key challenges: (1) failure to effectively exploit inherent K-space prior information, leaving persistent aliasing artifacts from zero-filled inputs; and (2) contamination of the target reconstruction by irrelevant information when multi-contrast fusion strategies are employed. To overcome these challenges, we present MambaMDN, a dual-domain framework for multi-contrast MRI reconstruction. Our approach first employs fully-sampled reference K-space data to complete the undersampled target data, generating structurally aligned but modality-mixed inputs. Subsequently, we develop a Mamba-based modality disentanglement network to extract and remove reference-specific features from the mixed representation. Furthermore, we introduce an iterative refinement mechanism that progressively enhances reconstruction accuracy through repeated feature purification. Extensive experiments demonstrate that MambaMDN significantly outperforms existing multi-contrast reconstruction methods.
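The abstract outlines three stages: completing the undersampled target K-space with reference K-space samples, disentangling reference-specific features from the resulting modality-mixed image, and iterating this purification. The sketch below is only a minimal illustration of that pipeline under stated assumptions, not the paper's implementation: the function names (`complete_kspace`, `reconstruct`), the fill rule, the data-consistency step, and `disentangle_net` (a stand-in for the Mamba-based network) are all hypothetical.

```python
# Illustrative sketch of the MambaMDN-style dual-domain pipeline.
# All names, shapes, and update rules here are assumptions for exposition.
import torch

def complete_kspace(target_k, reference_k, mask):
    """Fill unsampled target K-space locations with reference K-space values.

    target_k, reference_k: complex tensors of shape (H, W)
    mask: float tensor of shape (H, W); 1 where the target was sampled
    """
    # Keep measured target samples; borrow reference values elsewhere,
    # yielding a structurally aligned but modality-mixed K-space.
    return mask * target_k + (1 - mask) * reference_k

def reconstruct(target_k, reference_k, mask, disentangle_net, num_iters=3):
    """Hypothetical iterative refinement loop.

    disentangle_net: a network (e.g. Mamba-based) assumed to estimate the
    reference-specific component of the mixed image.
    """
    mixed_k = complete_kspace(target_k, reference_k, mask)
    image = torch.fft.ifft2(torch.fft.ifftshift(mixed_k)).abs()
    for _ in range(num_iters):
        # Remove the estimated reference-specific features ...
        image = image - disentangle_net(image)
        # ... then re-enforce data consistency with the measured K-space.
        k = torch.fft.fftshift(torch.fft.fft2(image.to(torch.complex64)))
        k = mask * target_k + (1 - mask) * k
        image = torch.fft.ifft2(torch.fft.ifftshift(k)).abs()
    return image
```

As a usage note, `disentangle_net` can be any callable mapping an image tensor to an estimate of its reference-contaminated component; the loop simply alternates feature purification with K-space data consistency.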
Similar Papers
DH-Mamba: Exploring Dual-domain Hierarchical State Space Models for MRI Reconstruction
Image and Video Processing
Makes blurry MRI scans clear faster.
HybridMamba: A Dual-domain Mamba for 3D Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see inside bodies better.
MMMamba: A Versatile Cross-Modal In Context Fusion Framework for Pan-Sharpening and Zero-Shot Image Enhancement
CV and Pattern Recognition
Makes blurry satellite pictures sharp and clear.