UniSegDiff: Boosting Unified Lesion Segmentation via a Staged Diffusion Model
By: Yilong Hu, Shijie Chang, Lihe Zhang, and more
Potential Business Impact:
Automatically outlines lesions in medical scans more accurately, across different organs and imaging types.
The Diffusion Probabilistic Model (DPM) has demonstrated remarkable performance across a variety of generative tasks. The inherent randomness of diffusion models helps address issues such as blurring at the edges of medical images and labels, making DPMs a promising approach for lesion segmentation. However, we find that the current training and inference strategies of diffusion models result in an uneven distribution of attention across timesteps, leading to longer training times and suboptimal solutions. To this end, we propose UniSegDiff, a novel diffusion model framework designed to address lesion segmentation in a unified manner across multiple modalities and organs. The framework introduces a staged training and inference approach that dynamically adjusts the prediction targets at different stages, forcing the model to maintain high attention across all timesteps, and it achieves unified lesion segmentation by pre-training the feature extraction network for segmentation. We evaluate performance on six different organs across various imaging modalities. Comprehensive experimental results demonstrate that UniSegDiff significantly outperforms previous state-of-the-art (SOTA) approaches. The code is available at https://github.com/HUYILONG-Z/UniSegDiff.
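The staged idea described in the abstract can be pictured with a short sketch: partition the diffusion timesteps into stages and switch the supervision target per stage so that no timestep range is neglected during training. The sketch below is a minimal PyTorch illustration assuming a three-way stage split, an epsilon-versus-mask target switch, and an image-conditioned network passed in as `model`; these specifics are assumptions for illustration, not the released UniSegDiff implementation.

```python
# Hedged sketch of staged diffusion training for lesion segmentation.
# Stage boundaries, prediction targets, and the conditioning scheme are
# illustrative assumptions, not the authors' released code.
import torch
import torch.nn.functional as F

T = 1000  # total diffusion timesteps
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def stage_of(t: torch.Tensor) -> torch.Tensor:
    """Map each timestep to a stage: 0 = low noise, 1 = mid, 2 = high noise.
    The three-way split is an assumption for illustration."""
    return torch.bucketize(t, torch.tensor([T // 3, 2 * T // 3]))

def staged_loss(model, mask, image, t):
    """One training step: noise the ground-truth mask to level t, condition on
    the image, and choose the prediction target by stage so every timestep
    range receives a direct supervision signal."""
    noise = torch.randn_like(mask)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * mask + (1 - a_bar).sqrt() * noise  # forward diffusion

    pred = model(torch.cat([x_t, image], dim=1), t)  # image-conditioned network

    stage = stage_of(t)
    # High-noise stage: predict the added noise; lower-noise stages: predict
    # the mask directly, keeping supervision close to the final segmentation.
    target = torch.where((stage == 2).view(-1, 1, 1, 1), noise, mask)
    return F.mse_loss(pred, target)
```

Predicting the mask directly at low-noise stages ties those timesteps to a loss that is close to the final segmentation output, which is one plausible way to keep attention high across all timesteps in the sense the abstract describes; the actual stage definitions and targets used by UniSegDiff are given in the paper and repository.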
Similar Papers
TextDiffSeg: Text-guided Latent Diffusion Model for 3d Medical Images Segmentation
Image and Video Processing
Uses text prompts to guide segmentation of 3D medical images.
Semi-Supervised Biomedical Image Segmentation via Diffusion Models and Teacher-Student Co-Training
CV and Pattern Recognition
Segments biomedical images with fewer labeled scans via diffusion models and teacher-student co-training.
Computationally Efficient Diffusion Models in Medical Imaging: A Comprehensive Review
Image and Video Processing
Reviews techniques for making diffusion models computationally efficient in medical imaging.