Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks
By: Mengyu Li, Guoyao Shen, Chad W. Farris, and more
Potential Business Impact:
Helps doctors find brain problems using far fewer labeled scans.
Transformer-based machine learning has shown great potential in medical imaging, but its real-world applicability remains limited by the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By applying the Masked Autoencoder (MAE) pretraining strategy to a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain latent representations that transfer well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. Under data-limited conditions, this model consistently outperforms strong baselines in both skull stripping and multi-class anatomical segmentation. Extensive quantitative and qualitative evaluations demonstrate the framework's efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
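The classification recipe described above amounts to linear probing: the pretrained MAE encoder is frozen and only a linear head is trained on its latent features, which is why so few labels are needed. Below is a minimal PyTorch sketch of that setup; it assumes a pretrained encoder that maps a batch of slices to one embedding vector each, and names such as mae_encoder, EMBED_DIM, and NUM_SEQUENCES are illustrative placeholders, not the authors' code.

import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Frozen pretrained encoder plus a trainable linear classification head."""
    def __init__(self, encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # keep the MAE encoder frozen
        self.head = nn.Linear(embed_dim, num_classes)  # the only trained weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                # no gradients through the encoder
            z = self.encoder(x)              # (B, embed_dim) latent features
        return self.head(z)                  # (B, num_classes) logits

# Few-shot training then optimizes the head alone, e.g.:
# model = LinearProbe(mae_encoder, EMBED_DIM, NUM_SEQUENCES)
# optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)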
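On the segmentation side, the CNN/MAE fusion can be pictured as a U-Net-style decoder stage that combines three streams: upsampled decoder features, a multiscale CNN skip connection, and MAE patch tokens reshaped into a 2-D map. The sketch below is a simplified reading of the MAE-FUnet description, not the released architecture; the channel arguments and the square-patch-grid assumption are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MAEFusionBlock(nn.Module):
    """One decoder stage fusing decoder features, a CNN skip, and MAE tokens."""
    def __init__(self, in_ch: int, skip_ch: int, mae_dim: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, skip_ch, kernel_size=2, stride=2)
        self.mae_proj = nn.Conv2d(mae_dim, skip_ch, kernel_size=1)
        self.conv = nn.Sequential(
            nn.Conv2d(3 * skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip, mae_tokens):
        x = self.up(x)                       # upsample decoder features to skip size
        B, N, D = mae_tokens.shape           # ViT patch tokens from the MAE encoder
        side = int(N ** 0.5)                 # assumes a square patch grid
        m = mae_tokens.transpose(1, 2).reshape(B, D, side, side)
        m = self.mae_proj(m)                 # project token dim to skip channels
        m = F.interpolate(m, size=skip.shape[-2:],
                          mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip, m], dim=1))  # fuse all three streams

Projecting and resizing the token map this way lets a single set of MAE embeddings feed every decoder scale, which is one plausible reason the pretrained features help even in data-limited segmentation.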
Similar Papers
MultiMAE for Brain MRIs: Robustness to Missing Inputs Using Multi-Modal Masked Autoencoder
CV and Pattern Recognition
Fills in missing brain scans for better medical pictures.
Masked Autoencoder Self Pre-Training for Defect Detection in Microelectronics
CV and Pattern Recognition
Finds tiny flaws in computer chips.
Masked Autoencoders for Ultrasound Signals: Robust Representation Learning for Downstream Applications
Machine Learning (CS)
Teaches computers to understand sound waves better.