Class-N-Diff: Classification-Induced Diffusion Model Can Make Fair Skin Cancer Diagnosis
By: Nusrat Munia, Abdullah Imran
Potential Business Impact:
Generates realistic dermoscopic images of skin disease, enabling fairer and more accurate AI-assisted skin cancer diagnosis.
Generative models, especially Diffusion Models, have demonstrated remarkable capability in generating high-quality synthetic data, including medical images. However, traditional class-conditioned generative models often struggle to generate images that accurately represent specific medical categories, limiting their usefulness for applications such as skin cancer diagnosis. To address this problem, we propose a classification-induced diffusion model, namely, Class-N-Diff, to simultaneously generate and classify dermoscopic images. Our Class-N-Diff model integrates a classifier within a diffusion model to guide image generation based on its class conditions. Thus, the model has better control over class-conditioned image synthesis, resulting in more realistic and diverse images. Additionally, the classifier demonstrates improved performance, highlighting its effectiveness for downstream diagnostic tasks. This unique integration in our Class-N-Diff makes it a robust tool for enhancing the quality and utility of diffusion model-based synthetic dermoscopic image generation. Our code is available at https://github.com/Munia03/Class-N-Diff.
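The core idea — a classifier steering a diffusion model's denoising toward a target class — can be illustrated with classifier guidance in a toy 1-D setting. The sketch below is a hypothetical stand-in, not the paper's actual Class-N-Diff architecture: the Gaussian data model, the analytic two-class classifier, and the noise schedule are all illustrative assumptions. The classifier's gradient of log p(y | x) is subtracted from the predicted noise, nudging each denoising step toward the chosen class.

```python
import math

# Toy 1-D classifier-guided diffusion sampling (illustrative only).
T = 50
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars, prod = [], 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

MU0, MU1 = -2.0, 2.0  # class-conditional means of a hypothetical 2-class dataset

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def eps_uncond(x, t):
    # Pretend the unconditional denoiser was trained on N(0, 1) data:
    # then x_t is marginally N(0, 1) and eps = sqrt(1 - alpha_bar) * x.
    return math.sqrt(1.0 - alpha_bars[t]) * x

def grad_log_p_class1(x):
    # Analytic gradient of log p(y=1 | x) for two unit-variance Gaussians
    # at MU0 and MU1 with equal priors (Bayes rule gives a logistic in x).
    a = MU1 - MU0
    return a * (1.0 - sigmoid(a * x))

def sample(guidance_scale, x_T=0.0):
    x = x_T
    for t in reversed(range(T)):
        sqrt_1m_abar = math.sqrt(1.0 - alpha_bars[t])
        # Classifier guidance: shift the predicted noise toward class 1.
        eps_hat = (eps_uncond(x, t)
                   - guidance_scale * sqrt_1m_abar * grad_log_p_class1(x))
        # Deterministic (noise-free) DDPM-style update, for reproducibility.
        x = (x - betas[t] / sqrt_1m_abar * eps_hat) / math.sqrt(alphas[t])
    return x

guided = sample(guidance_scale=1.0)    # drifts toward the class-1 side (x > 0)
unguided = sample(guidance_scale=0.0)  # stays at the unconditional fixed point
```

With guidance on, the sample is pulled toward the class-1 mode; with guidance off, the unconditional denoiser alone governs the trajectory. Class-N-Diff goes further by training the classifier jointly with the diffusion model rather than using a fixed external classifier as sketched here.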
Similar Papers
DermDiff: Generative Diffusion Model for Mitigating Racial Biases in Dermatology Diagnosis
CV and Pattern Recognition
Improves AI detection of skin conditions across all skin tones.
Diffusion models applied to skin and oral cancer classification
Image and Video Processing
Helps doctors detect skin and oral cancers earlier.
Advancing Image Classification with Discrete Diffusion Classification Modeling
CV and Pattern Recognition
Improves image classification accuracy while modeling prediction uncertainty.