Diffusion-Based Data Augmentation for Medical Image Segmentation
By: Maham Nazir, Muhammad Aqeel, Francesco Setti
Potential Business Impact:
Creates synthetic medical images to better train diagnostic AI models.
Medical image segmentation models struggle with rare abnormalities due to scarce annotated pathological data. We propose DiffAug, a novel framework that combines text-guided diffusion-based generation with automatic segmentation validation to address this challenge. Our approach uses latent diffusion models conditioned on medical text descriptions and spatial masks to synthesize abnormalities via inpainting on normal images. Generated samples undergo dynamic quality validation through a latent-space segmentation network that ensures accurate localization while enabling single-step inference. The text prompts, derived from medical literature, guide the generation of diverse abnormality types without requiring manual annotation. Our validation mechanism filters synthetic samples based on spatial accuracy, maintaining quality while operating efficiently through direct latent estimation. Evaluated on three medical imaging benchmarks (CVC-ClinicDB, Kvasir-SEG, REFUGE2), our framework achieves state-of-the-art performance with 8-10% Dice improvements over baselines and reduces false negative rates by up to 28% for challenging cases such as small polyps and flat lesions, which are critical for early detection in screening applications.
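To make the generate-then-validate pipeline concrete, the sketch below (not the authors' code) illustrates the loop the abstract describes: a text- and mask-conditioned inpainting step proposes an abnormality on a normal image, a segmentation network predicts the abnormality region, and the synthetic sample is kept only if its Dice overlap with the conditioning mask clears a threshold. The stub functions, model behavior, and threshold value are illustrative assumptions, not details from the paper.

```python
import torch

# --- Stand-ins for the two learned components (assumptions, not the paper's models) ---

def inpaint_abnormality(normal_image: torch.Tensor,
                        region_mask: torch.Tensor,
                        prompt: str) -> torch.Tensor:
    """Text- and mask-conditioned inpainting with a latent diffusion model.
    Stub: any inpainting pipeline could be dropped in here; for illustration we
    simply blend noise into the masked region of the normal image."""
    noise = torch.rand_like(normal_image)
    return normal_image * (1 - region_mask) + noise * region_mask


def segment_in_latent_space(image: torch.Tensor) -> torch.Tensor:
    """Single-step segmentation of the synthesized abnormality.
    Stub: stands in for the latent-space segmentation network used for validation."""
    return (image.mean(dim=1, keepdim=True) > 0.5).float()  # toy "prediction"


# --- Quality validation: keep samples whose predicted mask matches the target mask ---

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))


def generate_validated_samples(normal_images, region_masks, prompts, dice_threshold=0.7):
    """Generate synthetic abnormal images and keep only spatially accurate ones."""
    accepted = []
    for img, mask, prompt in zip(normal_images, region_masks, prompts):
        synthetic = inpaint_abnormality(img, mask, prompt)
        predicted_mask = segment_in_latent_space(synthetic)
        if dice_score(predicted_mask, mask) >= dice_threshold:
            # The conditioning mask doubles as a free segmentation annotation.
            accepted.append((synthetic, mask, prompt))
    return accepted


if __name__ == "__main__":
    imgs = [torch.rand(1, 3, 64, 64) for _ in range(4)]
    masks = [torch.zeros(1, 1, 64, 64) for _ in range(4)]
    for m in masks:
        m[..., 20:40, 20:40] = 1.0  # target region to inpaint
    prompts = ["a small sessile polyp with a smooth surface"] * 4
    kept = generate_validated_samples(imgs, masks, prompts)
    print(f"kept {len(kept)} of {len(imgs)} synthetic samples")
```

In the real framework the accepted (image, mask) pairs would be mixed into the training set of the downstream segmentation model; the filter is what keeps low-fidelity or mislocalized generations from polluting that set.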
Similar Papers
Diffusion Model in Latent Space for Medical Image Segmentation Task
CV and Pattern Recognition
Helps doctors see uncertain details in medical scans.
TextDiffSeg: Text-guided Latent Diffusion Model for 3D Medical Images Segmentation
Image and Video Processing
Helps doctors see inside bodies better with words.
LesionDiffusion: Towards Text-controlled General Lesion Synthesis
Image and Video Processing
Creates synthetic lesions in medical scans to better train AI models.