Comparison of ConvNeXt and Vision-Language Models for Breast Density Assessment in Screening Mammography

Published: June 16, 2025 | arXiv ID: 2506.13964v1

By: Yusdivia Molina-Román, David Gómez-Ortiz, Ernestina Menasalvas-Ruiz, and more

Potential Business Impact:

Automates breast density classification in screening mammograms, supporting faster and more consistent breast cancer risk assessment.

Business Areas:
Image Recognition, Data and Analytics, Software

Mammographic breast density classification is essential for cancer risk assessment but remains challenging due to subjective interpretation and inter-observer variability. This study compares multimodal and CNN-based methods for automated classification using the BI-RADS system, evaluating BioMedCLIP and ConvNeXt across three learning scenarios: zero-shot classification, linear probing with textual descriptions, and fine-tuning with numerical labels. Results show that zero-shot classification achieved modest performance, while the fine-tuned ConvNeXt model outperformed the BioMedCLIP linear probe. Although linear probing demonstrated potential with pretrained embeddings, it was less effective than full fine-tuning. These findings suggest that despite the promise of multimodal learning, CNN-based models with end-to-end fine-tuning provide stronger performance for specialized medical imaging. The study underscores the need for more detailed textual representations and domain-specific adaptations in future radiology applications.
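The zero-shot scenario described above follows the standard CLIP recipe: embed the mammogram and a text prompt for each BI-RADS density category, then pick the category whose prompt embedding is most similar to the image embedding. A minimal sketch of that matching step is below; the prompt wordings, the `zero_shot_classify` helper, and the toy embedding vectors are illustrative assumptions (in the study, BioMedCLIP would produce the actual embeddings).

```python
import math

# Illustrative prompts for the four BI-RADS density categories (A-D);
# the wording here is an assumption, not the paper's exact prompts.
PROMPTS = {
    "A": "a mammogram with almost entirely fatty breast tissue",
    "B": "a mammogram with scattered areas of fibroglandular density",
    "C": "a mammogram with heterogeneously dense breast tissue",
    "D": "a mammogram with extremely dense breast tissue",
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_emb, text_embs):
    """Return the BI-RADS category whose prompt embedding is
    closest (by cosine similarity) to the image embedding."""
    return max(text_embs, key=lambda cat: cosine(image_emb, text_embs[cat]))

# Toy example with hand-made 4-D embeddings (a real model would
# produce high-dimensional vectors from the prompts in PROMPTS).
text_embs = {
    "A": [1, 0, 0, 0],
    "B": [0, 1, 0, 0],
    "C": [0, 0, 1, 0],
    "D": [0, 0, 0, 1],
}
image_emb = [0.1, 0.9, 0.2, 0.0]
print(zero_shot_classify(image_emb, text_embs))  # → B
```

Linear probing keeps these embeddings frozen and trains only a lightweight classifier on top, whereas the fine-tuned ConvNeXt updates all weights end to end, which is what the study found most effective.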

Country of Origin
🇪🇸 Spain

Page Count
6 pages

Category
Electrical Engineering and Systems Science:
Image and Video Processing