A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging
By: Peirong Liu, Oula Puonti, Xiaoling Hu, and more
Potential Business Impact:
Makes brain scans work better, no matter how they're taken.
Recent learning-based approaches have made astonishing advances in calibrated medical imaging such as computerized tomography (CT), yet they struggle to generalize to uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. Here we introduce BrainFM, a modality-agnostic, multi-task vision foundation model for human brain imaging. With the proposed "mild-to-severe" intra-subject generation and "real-synth" mix-up training strategy, BrainFM is resilient to the appearance of acquired images (e.g., modality, contrast, deformation, resolution, artifacts), and can be directly applied to five fundamental brain imaging tasks: image synthesis for CT and T1w/T2w/FLAIR MRI, anatomy segmentation, scalp-to-cortical distance estimation, bias field estimation, and registration. We evaluate the efficacy of BrainFM on eleven public datasets, and demonstrate its robustness and effectiveness across all tasks and input modalities. Code is available at https://github.com/jhuldr/BrainFM.
Similar Papers
Brain Imaging Foundation Models, Are We There Yet? A Systematic Review of Foundation Models for Brain Imaging and Biomedical Research
Image and Video Processing
Helps doctors understand brain scans better.
A Foundation Model for Brain MRI with Dynamic Modality Integration
CV and Pattern Recognition
Helps doctors see brain problems with fewer scans.
Foundation Models in Medical Image Analysis: A Systematic Review and Meta-Analysis
CV and Pattern Recognition
Helps doctors understand medical pictures better.