SynthFM: Training Modality-agnostic Foundation Models for Medical Image Segmentation without Real Medical Data
By: Sourya Sengupta, Satrajit Chakrabarty, Keerthi Sravan Ravi, and more
Potential Business Impact:
Helps doctors find problems in medical pictures.
Foundation models such as the Segment Anything Model (SAM) excel at zero-shot segmentation of natural images but struggle with medical image segmentation because of differences in texture, contrast, and noise. Annotating medical images is costly and requires domain expertise, which limits the availability of large-scale annotated data. To address this, we propose SynthFM, a synthetic data generation framework that mimics the complexities of medical images, enabling foundation models to adapt without any real medical data. Using SAM's pretrained encoder and training the decoder from scratch on SynthFM's dataset, we evaluated the method on 11 anatomical structures across 9 datasets spanning CT, MRI, and ultrasound. SynthFM outperformed zero-shot baselines such as SAM and MedSAM across different prompt settings and on out-of-distribution datasets.
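The abstract stays at a high level, so as a rough illustration of the kind of synthesis it describes, the sketch below generates an image/mask pair in which a simple foreground shape is rendered with low contrast, a smooth intensity inhomogeneity, and additive noise. This is not the paper's actual pipeline: the function name, the NumPy-only implementation, and every parameter range here are assumptions made purely for illustration.

```python
import numpy as np

def synth_sample(size=256, rng=None):
    """Hypothetical sketch: one synthetic image/mask pair with
    medical-image-like texture, contrast, and noise (not SynthFM's code)."""
    rng = rng or np.random.default_rng()
    yy, xx = np.mgrid[0:size, 0:size]
    # Random ellipse standing in for an anatomical structure.
    cy, cx = rng.uniform(0.3, 0.7, 2) * size
    ry, rx = rng.uniform(0.1, 0.3, 2) * size
    mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    # Deliberately low foreground/background contrast.
    bg, fg = rng.uniform(0.3, 0.5), rng.uniform(0.45, 0.7)
    img = np.where(mask, fg, bg)
    # Smooth bias-field-like inhomogeneity plus per-pixel noise.
    bias = 0.1 * np.sin(2.0 * np.pi * rng.uniform(1, 3) * yy / size)
    img = np.clip(img + bias + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
    return img.astype(np.float32), mask.astype(np.uint8)

img, mask = synth_sample()  # pairs like this would supervise a mask decoder
```

Under the training scheme the abstract describes, such image/mask pairs would be embedded by SAM's frozen pretrained encoder while only the mask decoder is trained from scratch.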
Similar Papers
MedSAMix: A Training-Free Model Merging Approach for Medical Image Segmentation
CV and Pattern Recognition
Improves medical scans for better doctor diagnoses.
Adapting a Segmentation Foundation Model for Medical Image Classification
CV and Pattern Recognition
Helps doctors find sickness in body pictures.
A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging
CV and Pattern Recognition
Makes brain scans work better, no matter how they're taken.