Atlas is Your Perfect Context: One-Shot Customization for Generalizable Foundational Medical Image Segmentation
By: Ziyu Zhang, Yi Yu, Simeng Zhu, and more
Accurate medical image segmentation is essential for clinical diagnosis and treatment planning. While recent interactive foundation models (e.g., nnInteractive) enhance generalization through large-scale multimodal pretraining, they still depend on precise prompts and often perform below expectations in contexts that are underrepresented in their training data. We present AtlasSegFM, an atlas-guided framework that customizes available foundation models to clinical contexts with a single annotated example. The core innovations are: 1) a pipeline that provides context-aware prompts for foundation models via registration between a context atlas and query images, and 2) a test-time adapter that fuses predictions from both atlas registration and the foundation model. Extensive experiments across public and in-house datasets spanning multiple modalities and organs demonstrate that AtlasSegFM consistently improves segmentation, particularly for small, delicate structures. AtlasSegFM provides a lightweight, deployable solution for one-shot customization of foundation models in real-world clinical workflows. The code will be made publicly available.
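The abstract's two-stage idea can be illustrated with a toy sketch. This is not the authors' implementation: translation-only matching stands in for the paper's atlas-to-query registration, the warped atlas mask is reduced to a bounding-box prompt of the kind interactive foundation models accept, and a fixed convex combination stands in for the learned test-time adapter. All function names and the box convention (row_min, col_min, row_max, col_max) are illustrative assumptions.

```python
import numpy as np

def register_translation(atlas_img, query_img, search=4):
    """Toy 'registration': brute-force the integer shift that best
    aligns the atlas image with the query image (correlation score).
    Real atlas-based methods use deformable registration instead."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(atlas_img, dy, axis=0), dx, axis=1)
            score = float((shifted * query_img).sum())
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

def warp_mask(mask, shift):
    """Propagate the one-shot atlas annotation into query space."""
    dy, dx = shift
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def mask_to_box_prompt(mask):
    """Derive a context-aware box prompt (row_min, col_min, row_max,
    col_max) from the warped atlas mask, to hand to a promptable model."""
    ys, xs = np.nonzero(mask)
    return (ys.min(), xs.min(), ys.max(), xs.max())

def fuse_predictions(atlas_prob, fm_prob, alpha=0.5):
    """Stand-in for the test-time adapter: blend the registration-based
    prediction with the foundation model's output."""
    return alpha * atlas_prob + (1.0 - alpha) * fm_prob

# Tiny synthetic example: a 5x5 structure shifted by (2, 2) between
# the annotated atlas and the query image.
atlas_img = np.zeros((32, 32)); atlas_img[8:13, 8:13] = 1.0
atlas_mask = atlas_img.copy()
query_img = np.zeros((32, 32)); query_img[10:15, 10:15] = 1.0

shift = register_translation(atlas_img, query_img)
box = mask_to_box_prompt(warp_mask(atlas_mask, shift))
print(shift, box)  # the recovered shift localizes the prompt on the query
```

The recovered box would then be passed as the prompt to the foundation model, whose probability map is fused with the warped atlas mask, so the atlas both guides the prompt and regularizes the final prediction.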