Visual Bias and Interpretability in Deep Learning for Dermatological Image Analysis
By: Enam Ahmed Taufik, Abdullah Khondoker, Antara Firoz Parsa, and more
Potential Business Impact:
Helps computers spot skin problems from pictures.
Accurate skin disease classification is a critical yet challenging task due to high inter-class similarity, intra-class variability, and complex lesion textures. While deep learning-based computer-aided diagnosis (CAD) systems have shown promise in automating dermatological assessments, their performance is highly dependent on image pre-processing and model architecture. This study proposes a deep learning framework for multi-class skin disease classification, systematically evaluating three image pre-processing techniques: standard RGB, CMY color space transformation, and Contrast Limited Adaptive Histogram Equalization (CLAHE). We benchmark the performance of pre-trained convolutional neural networks (DenseNet201, EfficientNetB5) and transformer-based models (ViT, Swin Transformer, DINOv2 Large) using accuracy and F1-score as evaluation metrics. Results show that DINOv2 with RGB pre-processing achieves the highest accuracy (up to 93%) and F1-scores across all variants. Grad-CAM visualizations applied to RGB inputs further reveal precise lesion localization, enhancing interpretability. These findings underscore the importance of effective pre-processing and model choice in building robust and explainable CAD systems for dermatology.
Similar Papers
Toward Accessible Dermatology: Skin Lesion Classification Using Deep Learning Models on Mobile-Acquired Images
CV and Pattern Recognition
Helps phones spot skin problems to help doctors.
Enhancing Fairness in Skin Lesion Classification for Medical Diagnosis Using Prune Learning
CV and Pattern Recognition
Helps doctors spot skin problems fairly on all skin.
XAI-Driven Skin Disease Classification: Leveraging GANs to Augment ResNet-50 Performance
CV and Pattern Recognition
Helps doctors spot skin diseases better and faster.