SkinDualGen: Prompt-Driven Diffusion for Simultaneous Image-Mask Generation in Skin Lesions
By: Zhaobin Xu
Potential Business Impact:
Generates synthetic skin images to help doctors detect disease.
Medical image analysis plays a pivotal role in the early diagnosis of conditions such as skin lesions. However, data scarcity and class imbalance significantly hinder the performance of deep learning models. We propose a novel method that leverages the pretrained Stable Diffusion-2.0 model to generate high-quality synthetic skin lesion images and corresponding segmentation masks, augmenting training datasets for classification and segmentation tasks. We adapt Stable Diffusion-2.0 through domain-specific Low-Rank Adaptation (LoRA) fine-tuning and joint optimization of a multi-objective loss function, enabling the model to generate clinically relevant images and segmentation masks simultaneously, conditioned on textual descriptions, in a single step. Experimental results show that the generated images, validated by FID scores, closely match real images in quality. A hybrid dataset combining real and synthetic data markedly improves classification and segmentation performance, yielding gains of 8% to 15% in accuracy and F1-score, along with improvements in other key metrics such as the Dice coefficient and IoU. Our approach offers a scalable solution to the data challenges of medical imaging and contributes to more accurate and reliable diagnosis of rare diseases.
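The abstract does not spell out the multi-objective loss, but a common way to train a diffusion model for joint image and mask generation is to combine the standard denoising MSE with a mask-quality term such as a soft Dice loss. The sketch below is a minimal illustration of that idea in NumPy; the `lambda_mask` weight, the soft-Dice formulation, and the function name `joint_diffusion_loss` are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def joint_diffusion_loss(pred_noise, true_noise, pred_mask, true_mask,
                         lambda_mask=0.5):
    """Illustrative multi-objective loss for joint image-mask diffusion.

    Combines the usual denoising objective (MSE between predicted and
    true noise on the image latents) with a soft Dice loss on the
    decoded mask. `lambda_mask` balances the two terms; its value here
    is an assumption, not taken from the paper.
    """
    # Denoising objective: mean squared error on the noise prediction.
    mse = np.mean((pred_noise - true_noise) ** 2)

    # Soft Dice loss on the mask: 1 - Dice coefficient, with a small
    # epsilon for numerical stability when masks are nearly empty.
    eps = 1e-6
    intersection = np.sum(pred_mask * true_mask)
    dice = (2.0 * intersection + eps) / (
        np.sum(pred_mask) + np.sum(true_mask) + eps
    )
    return mse + lambda_mask * (1.0 - dice)
```

A perfect prediction drives both terms to zero, so the combined loss vanishes; in practice the mask term pushes the model to keep generated lesions and their segmentation masks spatially consistent.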
Similar Papers
LesionGen: A Concept-Guided Diffusion Model for Dermatology Image Synthesis
Image and Video Processing
Creates realistic skin pictures for training doctors.
Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation
CV and Pattern Recognition
Makes medical scans clearer for better diagnoses.
CoSimGen: Controllable Diffusion Model for Simultaneous Image and Mask Generation
CV and Pattern Recognition
Creates realistic pictures and their matching outlines.