Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
By: Daniel Strick, Carlos Garcia, Anthony Huang
Potential Business Impact:
Helps doctors find diseases on X-rays faster.
Deep learning for radiologic image analysis is a rapidly growing field in biomedical research and is likely to become standard practice in modern medicine. On the publicly available NIH ChestX-ray14 dataset, which contains X-ray images labeled for the presence or absence of 14 different diseases, we reproduced the CheXNet algorithm and explored other architectures that outperform CheXNet's baseline metrics. Model performance was evaluated primarily with the F1 score and AUC-ROC, both critical metrics for imbalanced, multi-label classification tasks in medical imaging. The best model achieved an average AUC-ROC of 0.85 and an average F1 score of 0.39 across all 14 disease classes in the dataset.
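To make the evaluation protocol concrete, the sketch below computes per-class AUC-ROC (via the rank-sum formulation) and F1, then macro-averages them across classes, which is how multi-label chest X-ray models are typically scored. The labels and scores here are toy values for illustration, not results from the NIH ChestX-ray14 dataset, and the 0.5 decision threshold is an assumption.

```python
# Hedged sketch of macro-averaged AUC-ROC and F1 for multi-label evaluation.
# Toy data only; not the paper's actual predictions or thresholds.

def auc_roc(y_true, y_score):
    """AUC-ROC via the Mann-Whitney U (rank-sum) formulation:
    the fraction of (positive, negative) pairs where the positive
    receives the higher score, counting ties as half."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    if not pos or not neg:
        return float("nan")  # AUC undefined for a single-class column
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(y_true, y_pred):
    """F1 = 2*TP / (2*TP + FP + FN) for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Toy multi-label setup: 2 disease classes, 4 images (real setup has 14 classes).
y_true = [[1, 0, 1, 0], [0, 1, 1, 0]]                    # per-class ground truth
y_score = [[0.9, 0.6, 0.4, 0.2], [0.3, 0.8, 0.6, 0.1]]   # model probabilities
aucs = [auc_roc(t, s) for t, s in zip(y_true, y_score)]
f1s = [f1(t, [1 if s >= 0.5 else 0 for s in sc])         # assumed 0.5 threshold
       for t, sc in zip(y_true, y_score)]
print(sum(aucs) / len(aucs), sum(f1s) / len(f1s))        # macro averages
```

Macro-averaging weights each disease class equally regardless of prevalence, which is why it is paired with F1 on imbalanced datasets like ChestX-ray14.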
Similar Papers
CheX-DS: Improving Chest X-ray Image Classification with Ensemble Learning Based on DenseNet and Swin Transformer
CV and Pattern Recognition
Helps doctors find lung sicknesses on X-rays.
Chest X-ray Classification using Deep Convolution Models on Low-resolution images with Uncertain Labels
CV and Pattern Recognition
Finds sickness in blurry X-rays better.
Comparative Evaluation of Radiomics and Deep Learning Models for Disease Detection in Chest Radiography
Image and Video Processing
Helps doctors find lung sickness using AI.