Efficient Universal Models for Medical Image Segmentation via Weakly Supervised In-Context Learning
By: Jiesi Hu, Yanwu Yang, Zhiyu Ye, and more
Potential Business Impact:
Helps doctors find sickness in scans faster.
Universal models for medical image segmentation, such as interactive and in-context learning (ICL) models, offer strong generalization but require extensive annotations. Interactive models need repeated user prompts for each image, while ICL relies on dense, pixel-level labels. To address this, we propose Weakly Supervised In-Context Learning (WS-ICL), a new ICL paradigm that leverages weak prompts (e.g., bounding boxes or points) instead of dense labels for context. This approach significantly reduces annotation effort by eliminating the need for fine-grained masks and for repeated user prompting on every image. We evaluated the proposed WS-ICL model on three held-out benchmarks. Experimental results demonstrate that WS-ICL achieves performance comparable to regular ICL models at a significantly lower annotation cost. In addition, WS-ICL is highly competitive even under the interactive paradigm. These findings establish WS-ICL as a promising step toward more efficient and unified universal models for medical image segmentation. Our code and model are publicly available at https://github.com/jiesihu/Weak-ICL.
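To make the idea concrete, below is a minimal sketch (not the authors' released API) of how a WS-ICL context set might be assembled: each context image is paired with a weak prompt, a bounding box or a few clicked points, rasterized into a sparse prompt map instead of a dense ground-truth mask. The function and method names (`rasterize_box`, `rasterize_points`, `build_context`, `segment_with_context`) are illustrative assumptions, not code from the Weak-ICL repository.

```python
# Hypothetical sketch of building a weak-prompt context set for WS-ICL-style inference.
import numpy as np

def rasterize_box(shape, box):
    """Turn an (x0, y0, x1, y1) bounding box into a binary prompt map."""
    prompt = np.zeros(shape, dtype=np.float32)
    x0, y0, x1, y1 = box
    prompt[y0:y1, x0:x1] = 1.0
    return prompt

def rasterize_points(shape, points):
    """Turn a list of (x, y) foreground clicks into a sparse prompt map."""
    prompt = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        prompt[y, x] = 1.0
    return prompt

def build_context(images, weak_prompts):
    """Pair each context image with its rasterized weak prompt (box or points)."""
    context = []
    for img, (kind, value) in zip(images, weak_prompts):
        if kind == "box":
            prompt_map = rasterize_box(img.shape, value)
        else:  # "points"
            prompt_map = rasterize_points(img.shape, value)
        context.append((img, prompt_map))
    return context

if __name__ == "__main__":
    # Two context images annotated only with weak prompts, plus one query image.
    ctx_images = [np.random.rand(256, 256).astype(np.float32) for _ in range(2)]
    weak_prompts = [("box", (40, 60, 180, 200)), ("points", [(128, 128), (90, 150)])]
    query_image = np.random.rand(256, 256).astype(np.float32)

    context = build_context(ctx_images, weak_prompts)
    # prediction = ws_icl_model.segment_with_context(query_image, context)  # assumed API
```

Under this framing, the weak-prompt context replaces the dense-label context of regular ICL, and the query image is segmented without any per-image user interaction at test time.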
Similar Papers
Towards Robust In-Context Learning for Medical Image Segmentation via Data Synthesis
CV and Pattern Recognition
Creates realistic fake medical pictures for training AI.
Cycle Context Verification for In-Context Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see inside bodies better.