MV-MLM: Bridging Multi-View Mammography and Language for Breast Cancer Diagnosis and Risk Prediction
By: Shunjie-Fabian Zheng, Hyeonjun Lee, Thijs Kooi, and more
Potential Business Impact:
Helps doctors find breast cancer faster.
Large annotated datasets are essential for training robust Computer-Aided Diagnosis (CAD) models for breast cancer detection and risk prediction. However, acquiring such datasets with fine-grained annotations is both costly and time-consuming. Vision-Language Models (VLMs), such as CLIP, which are pre-trained on large collections of image-text pairs, offer a promising solution by improving robustness and data efficiency in medical imaging tasks. This paper introduces a novel Multi-View Mammography and Language Model (MV-MLM) for breast cancer classification and risk prediction, trained on a dataset of paired mammogram images and synthetic radiology reports. Our MV-MLM leverages multi-view supervision to learn rich representations from extensive radiology data by employing cross-modal self-supervision across image-text pairs, covering multiple mammographic views and their corresponding pseudo-radiology reports. We propose a novel joint visual-textual learning strategy that improves generalization and accuracy across data types and tasks: the model learns to distinguish breast tissue and cancer characteristics (calcification, mass) and uses these patterns to interpret mammography images and predict cancer risk. We evaluated our method on both private and publicly available datasets, demonstrating that the proposed model achieves state-of-the-art performance on three classification tasks: (1) malignancy classification, (2) subtype classification, and (3) image-based cancer risk prediction. Furthermore, the model exhibits strong data efficiency, outperforming existing fully supervised and VLM baselines despite being trained on synthetic text reports, without the need for actual radiology reports.
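To make the cross-modal self-supervision concrete, below is a minimal PyTorch sketch of a CLIP-style contrastive objective adapted to multi-view exams. It is an illustration only, not the paper's actual implementation: the class name `MultiViewImageTextContrastive`, the choice of mean-pooling the view embeddings into one exam-level vector, and the `image_encoder`/`text_encoder` interfaces are all assumptions, since the abstract does not specify MV-MLM's architecture or loss in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewImageTextContrastive(nn.Module):
    """Hypothetical CLIP-style objective over multi-view mammograms and report text.

    Each exam contributes several views (e.g. L-CC, R-CC, L-MLO, R-MLO); their
    embeddings are mean-pooled into a single exam vector and aligned with the
    embedding of the paired (synthetic) report via a symmetric InfoNCE loss.
    """

    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 temperature: float = 0.07):
        super().__init__()
        self.image_encoder = image_encoder   # any backbone mapping images -> (N, D); assumption
        self.text_encoder = text_encoder     # any encoder mapping tokens -> (B, D); assumption
        self.logit_scale = nn.Parameter(torch.tensor(1.0 / temperature).log())

    def forward(self, views: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, C, H, W); tokens: tokenized report text, (batch, seq_len)
        b, v = views.shape[:2]
        view_emb = self.image_encoder(views.flatten(0, 1)).view(b, v, -1)
        img = F.normalize(view_emb.mean(dim=1), dim=-1)   # pool views into one exam embedding
        txt = F.normalize(self.text_encoder(tokens), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()   # (b, b) exam-report similarities
        targets = torch.arange(b, device=logits.device)   # matched pairs lie on the diagonal
        # Symmetric InfoNCE: match each exam to its report and each report to its exam.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
```

In this sketch, pooling before the contrastive loss is one simple way to realize "multi-view supervision"; finer-grained view-to-sentence alignment schemes are equally plausible readings of the abstract.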
Similar Papers
Breast Cancer VLMs: Clinically Practical Vision-Language Train-Inference Models
CV and Pattern Recognition
Helps doctors find breast cancer earlier and better.
GLAM: Geometry-Guided Local Alignment for Multi-View VLP in Mammography
CV and Pattern Recognition
Helps doctors spot breast cancer faster and better.
Retrieval-Augmented VLMs for Multimodal Melanoma Diagnosis
CV and Pattern Recognition
Helps doctors spot skin cancer faster and better.