CDPDNet: Integrating Text Guidance with Hybrid Vision Encoders for Medical Image Segmentation
By: Jiong Wu, Yang Xing, Boxiao Yu, and more
Potential Business Impact:
Helps doctors automatically find organs and tumors in medical scans.
Most publicly available medical segmentation datasets are only partially labeled, with annotations provided for a subset of anatomical structures. When multiple datasets are combined for training, this incomplete annotation poses challenges, as it limits the model's ability to learn shared anatomical representations across datasets. Furthermore, vision-only frameworks often fail to capture complex anatomical relationships and task-specific distinctions, leading to reduced segmentation accuracy and poor generalizability to unseen datasets. In this study, we proposed a novel CLIP-DINO Prompt-Driven Segmentation Network (CDPDNet), which combined a self-supervised vision transformer with CLIP-based text embeddings and introduced task-specific text prompts to tackle these challenges. Specifically, the framework was built upon a convolutional neural network (CNN) and incorporated DINOv2 to extract both fine-grained and global visual features, which were then fused using a multi-head cross-attention module to overcome the limited long-range modeling capability of CNNs. In addition, CLIP-derived text embeddings were projected into the visual space to help model complex relationships among organs and tumors. To further address the partial-label challenge and enhance inter-task discriminative capability, a Text-based Task Prompt Generation (TTPG) module that generated task-specific prompts was designed to guide the segmentation. Extensive experiments on multiple medical imaging datasets demonstrated that CDPDNet consistently outperformed existing state-of-the-art segmentation methods. Code and a pretrained model are available at: https://github.com/wujiong-hub/CDPDNet.git.
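To make the two fusion ideas in the abstract concrete, here is a minimal PyTorch sketch, not the official implementation: it shows (a) CNN features attending to DINOv2 patch tokens through multi-head cross-attention, and (b) CLIP text embeddings projected into the visual feature space and used as per-class prompts. Module names, tensor shapes, and dimensions (e.g. CrossAttentionFusion, TextPromptHead, 256/768/512 channels) are illustrative assumptions, not values taken from the paper or its repository.

```python
# Illustrative sketch only; see https://github.com/wujiong-hub/CDPDNet.git for the authors' code.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse fine-grained CNN features (queries) with global DINOv2 tokens
    (keys/values) via multi-head cross-attention."""

    def __init__(self, cnn_dim=256, dino_dim=768, num_heads=8):
        super().__init__()
        self.proj_dino = nn.Linear(dino_dim, cnn_dim)   # align DINOv2 tokens to CNN channels
        self.attn = nn.MultiheadAttention(cnn_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(cnn_dim)

    def forward(self, cnn_feat, dino_tokens):
        # cnn_feat: (B, C, H, W) -> token sequence (B, H*W, C)
        B, C, H, W = cnn_feat.shape
        q = cnn_feat.flatten(2).transpose(1, 2)
        kv = self.proj_dino(dino_tokens)                # (B, N_dino, C)
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)                    # residual connection + layer norm
        return fused.transpose(1, 2).reshape(B, C, H, W)


class TextPromptHead(nn.Module):
    """Project CLIP text embeddings (one per class/task prompt) into the visual
    space and use them as per-class classifiers over pixel features."""

    def __init__(self, clip_dim=512, cnn_dim=256):
        super().__init__()
        self.proj_text = nn.Linear(clip_dim, cnn_dim)

    def forward(self, visual_feat, text_emb):
        # visual_feat: (B, C, H, W); text_emb: (K, clip_dim) for K classes
        w = self.proj_text(text_emb)                    # (K, C)
        # dot product of each class embedding with every pixel feature -> (B, K, H, W)
        return torch.einsum("bchw,kc->bkhw", visual_feat, w)


if __name__ == "__main__":
    cnn_feat = torch.randn(2, 256, 32, 32)    # CNN feature map (assumed shape)
    dino_tokens = torch.randn(2, 196, 768)    # DINOv2 patch tokens (assumed shape)
    text_emb = torch.randn(5, 512)            # CLIP embeddings for 5 task prompts
    fused = CrossAttentionFusion()(cnn_feat, dino_tokens)
    logits = TextPromptHead()(fused, text_emb)
    print(logits.shape)                       # torch.Size([2, 5, 32, 32])
```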
Similar Papers
Text-guided Visual Prompt DINO for Generic Segmentation
CV and Pattern Recognition
Lets computers see and name anything in pictures.
DpDNet: An Dual-Prompt-Driven Network for Universal PET-CT Segmentation
Image and Video Processing
Helps doctors find cancer in PET-CT scans using computer vision.
Towards Universal Text-driven CT Image Segmentation
CV and Pattern Recognition
Lets doctors find body parts in scans using words.