A Foundation Model for Brain MRI with Dynamic Modality Integration
By: Minh Sao Khue Luu, Bair N. Tuchinov
Potential Business Impact:
Helps doctors see brain problems with fewer scans.
We present a foundation model for brain MRI that works with different combinations of imaging sequences. The model uses a single encoder with learnable modality embeddings, conditional layer normalization, and a masked autoencoding objective that accounts for missing modalities. A variance-covariance regularizer stabilizes feature learning and improves representation diversity. This design removes the need for a separate model for each modality and allows the network to adapt when some sequences are missing or unseen. The model is trained on about 60,000 multi-center MRIs using self-supervised reconstruction and modality imputation to learn flexible representations; the learnable modality embeddings guide feature extraction so the encoder can adjust to whichever inputs are present. We describe our planned evaluation on brain tumor and multiple sclerosis segmentation, as well as lesion classification, under various modality settings. Preliminary results indicate that the approach is feasible, and further experiments are planned to study its performance in more detail. All code and pretrained models are available at https://github.com/BrainFM/brainfm.
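To make the conditioning mechanism concrete, the sketch below shows one plausible way a single encoder could combine learnable modality embeddings with conditional layer normalization. This is a minimal PyTorch illustration, not the authors' released code: the class name `ModalityConditionalLayerNorm`, the averaging of embeddings over present modalities, and the prediction of per-feature scale and shift are all assumptions, since the abstract does not specify these details.

```python
# Minimal sketch (assumed design, not the authors' released code):
# layer normalization whose scale and shift are predicted from a learnable
# embedding of whichever MRI sequences are present for a given scan.
import torch
import torch.nn as nn

class ModalityConditionalLayerNorm(nn.Module):
    def __init__(self, dim: int, num_modalities: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # One learnable embedding per sequence (e.g., T1, T1ce, T2, FLAIR).
        self.modality_emb = nn.Embedding(num_modalities, dim)
        # Map the pooled embedding to per-feature scale and shift.
        self.to_scale_shift = nn.Linear(dim, 2 * dim)

    def forward(self, x: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); present: (batch, num_modalities) 0/1 mask
        # marking which sequences are available for each scan.
        w = present.float().unsqueeze(-1)                          # (B, M, 1)
        pooled = (self.modality_emb.weight * w).sum(1) / w.sum(1).clamp(min=1)
        scale, shift = self.to_scale_shift(pooled).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

# Example: two scans, one with all four sequences, one missing T2.
x = torch.randn(2, 16, 128)
present = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 1]])
out = ModalityConditionalLayerNorm(dim=128)(x, present)  # (2, 16, 128)
```

Averaging only the embeddings of present modalities is one simple way to keep the conditioning signal well defined for any subset of inputs; the paper may use a different pooling scheme.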
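A masked autoencoding objective that "accounts for missing modalities" presumably restricts the reconstruction error to sequences that were actually acquired, rather than penalizing the model on channels it never saw. The following is a sketch under that assumption; the tensor layout (batch, modality, depth, height, width) and the plain mean-squared error are also assumptions.

```python
# Sketch of a reconstruction loss restricted to available modalities.
# The (batch, modality, D, H, W) layout and MSE choice are assumptions.
import torch

def masked_reconstruction_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               present: torch.Tensor) -> torch.Tensor:
    # present: (batch, num_modalities) 0/1 mask of acquired sequences.
    per_voxel = (pred - target) ** 2
    # Broadcast the mask over the spatial dimensions.
    mask = present.float().view(*present.shape,
                                *([1] * (pred.dim() - present.dim())))
    num = (per_voxel * mask).sum()
    den = mask.expand_as(per_voxel).sum().clamp(min=1)
    return num / den

# Example: 2 scans, 4 sequences, tiny 8^3 volumes; scan 2 lacks T2.
pred = torch.randn(2, 4, 8, 8, 8)
target = torch.randn(2, 4, 8, 8, 8)
present = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 1]])
loss = masked_reconstruction_loss(pred, target, present)
```

The same masking idea extends to modality imputation: a missing sequence can be treated as a reconstruction target when ground truth exists but was hidden from the encoder.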
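The variance-covariance regularizer is in the spirit of VICReg (Bardes et al., 2022); whether the paper uses exactly this formulation is an assumption. The sketch below shows the standard variance and covariance terms applied to pooled encoder features.

```python
# Sketch of a VICReg-style variance-covariance penalty; the exact
# formulation and weighting used in the paper are assumptions.
import torch
import torch.nn.functional as F

def variance_covariance_loss(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # z: (batch, dim) pooled encoder features.
    z = z - z.mean(dim=0)
    # Variance term: push each feature's std toward at least 1,
    # discouraging collapsed (constant) features.
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = F.relu(1.0 - std).mean()
    # Covariance term: drive off-diagonal covariances toward zero,
    # decorrelating features and encouraging representation diversity.
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d
    return var_loss + cov_loss
```

In training, the total objective would then be something like `masked_reconstruction_loss(...) + lam * variance_covariance_loss(z)`, where `lam` is a tuning hyperparameter (its value is not given in the abstract).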
Similar Papers
A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging
CV and Pattern Recognition
Makes brain scans work better, no matter how they're taken.
Towards Generalisable Foundation Models for 3D Brain MRI
CV and Pattern Recognition
Helps doctors find brain problems from scans.
Bridging Foundation Models and Efficient Architectures: A Modular Brain Imaging Framework with Local Masking and Pretrained Representation Learning
Neurons and Cognition
Predicts age and smarts from brain scans.