CheX-DS: Improving Chest X-ray Image Classification with Ensemble Learning Based on DenseNet and Swin Transformer
By: Xinran Li, Yu Liu, Xiujuan Xu, and more
Potential Business Impact:
Helps doctors find lung diseases on X-rays.
The automatic diagnosis of chest diseases is a popular and challenging task. Most current methods are based on convolutional neural networks (CNNs), which capture local features well but neglect global context. Recently, self-attention mechanisms have been introduced into computer vision and have demonstrated strong performance. This paper therefore proposes an effective model, CheX-DS, for classifying long-tailed, multi-label chest X-ray data. The model ensembles DenseNet, a CNN that performs well on medical imaging, with the Swin Transformer, combining the complementary strengths of CNNs and Transformers. The loss function of CheX-DS combines weighted binary cross-entropy loss with asymmetric loss, effectively addressing class imbalance. Evaluated on the NIH ChestX-ray14 dataset, the model outperforms previous studies with an average AUC of 83.76%.
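The abstract describes two ingredients: a loss that mixes weighted binary cross-entropy with asymmetric loss to handle imbalance, and an ensemble that fuses DenseNet and Swin Transformer predictions. The sketch below illustrates both ideas in plain numpy; the exact per-class weights, asymmetric-loss hyperparameters (gamma_pos, gamma_neg, margin), mixing coefficients, and the averaging ensemble rule are assumptions for illustration, not the paper's published settings.

```python
import numpy as np

def weighted_bce(p, y, pos_weight, eps=1e-8):
    # Weighted binary cross-entropy: pos_weight up-weights rare positive
    # labels per class to counter long-tail imbalance.
    return -np.mean(pos_weight * y * np.log(p + eps)
                    + (1 - y) * np.log(1 - p + eps))

def asymmetric_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, margin=0.05, eps=1e-8):
    # Asymmetric loss: focuses gradient on hard negatives by shifting
    # negative probabilities (margin) and focusing with gamma_neg.
    # Hyperparameter values here are illustrative defaults, not the paper's.
    p_neg = np.clip(p - margin, 0.0, 1.0)
    loss_pos = y * (1 - p) ** gamma_pos * np.log(p + eps)
    loss_neg = (1 - y) * p_neg ** gamma_neg * np.log(1 - p_neg + eps)
    return -np.mean(loss_pos + loss_neg)

def chex_ds_loss(p, y, pos_weight, alpha=0.5):
    # Assumed combination: a convex mix of the two losses (alpha is hypothetical).
    return alpha * weighted_bce(p, y, pos_weight) \
        + (1 - alpha) * asymmetric_loss(p, y)

def ensemble_predict(p_densenet, p_swin, w=0.5):
    # Probability-level averaging of the two backbones' sigmoid outputs;
    # the paper's exact fusion rule may differ.
    return w * p_densenet + (1 - w) * p_swin

# Toy example: 1 image, 3 disease labels.
p_d = np.array([[0.9, 0.2, 0.7]])   # DenseNet sigmoid outputs
p_s = np.array([[0.8, 0.1, 0.6]])   # Swin Transformer sigmoid outputs
y = np.array([[1.0, 0.0, 1.0]])     # multi-label ground truth
p = ensemble_predict(p_d, p_s)
loss = chex_ds_loss(p, y, pos_weight=np.array([2.0, 2.0, 2.0]))
```

With equal weights the ensemble simply averages the two probability vectors, so a disease missed by one backbone but confidently flagged by the other can still cross the decision threshold.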
Similar Papers
Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification
Image and Video Processing
Helps doctors find diseases on X-rays faster.
Automated diagnosis of lung diseases using vision transformer: a comparative study on chest x-ray classification
Image and Video Processing
Finds pneumonia on X-rays with 99% accuracy.
Chest X-ray Classification using Deep Convolution Models on Low-resolution images with Uncertain Labels
CV and Pattern Recognition
Finds diseases in low-resolution X-rays better.