GLAM: Geometry-Guided Local Alignment for Multi-View VLP in Mammography
By: Yuexi Du, Lihui Chen, Nicha C. Dvornek
Potential Business Impact:
Could help doctors detect breast cancer faster and more accurately.
Mammography screening is an essential tool for early detection of breast cancer. The speed and accuracy of mammography interpretation have the potential to be improved with deep learning methods. However, the development of a foundation vision-language model (VLM) is hindered by limited data and the domain gap between natural and medical images. Existing mammography VLMs, adapted from natural-image models, often ignore domain-specific characteristics such as the multi-view relationships in mammography. Unlike radiologists, who analyze both views together to exploit ipsilateral correspondence, current methods either treat the views as independent images or fail to properly model cross-view correspondence, losing critical geometric context and yielding suboptimal predictions. We propose GLAM: Global and Local Alignment for Multi-view mammography for VLM pretraining using geometry guidance. By leveraging prior knowledge about the multi-view imaging process of mammograms, our model learns local cross-view alignments and fine-grained local features through joint global and local, visual-visual, and visual-language contrastive learning. Pretrained on EMBED [14], one of the largest open mammography datasets, our model outperforms baselines across multiple datasets under different settings.
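The abstract's core mechanism is contrastive alignment between paired mammographic views (and between image and text). Below is a minimal, hedged sketch of the standard symmetric InfoNCE-style objective that such visual-visual alignment builds on; the function names (`info_nce`, `symmetric_contrastive_loss`), the CC/MLO variable names, and the temperature value are illustrative assumptions, not the paper's actual implementation, which additionally incorporates geometry-guided local alignment.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """One-directional InfoNCE: row i of `a` should match row i of `b`.

    a, b: (N, D) embedding matrices; matched pairs share an index.
    """
    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature  # (N, N) similarity matrix

    # Numerically stable log-softmax over each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Cross-entropy with the matched pair on the diagonal
    return -np.mean(np.diag(log_prob))

def symmetric_contrastive_loss(view_cc, view_mlo, temperature=0.07):
    """Average both alignment directions (e.g., CC->MLO and MLO->CC)."""
    return 0.5 * (info_nce(view_cc, view_mlo, temperature)
                  + info_nce(view_mlo, view_cc, temperature))
```

In a multi-view VLM setup, the same loss shape is reused at several levels: between global embeddings of the two ipsilateral views, between image and report embeddings, and (with geometry guidance, as GLAM proposes) between local patch features believed to correspond across views.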
Similar Papers
MV-MLM: Bridging Multi-View Mammography and Language for Breast Cancer Diagnosis and Risk Prediction
CV and Pattern Recognition
Helps doctors find breast cancer faster.
Breast Cancer VLMs: Clinically Practical Vision-Language Train-Inference Models
CV and Pattern Recognition
Could help doctors find breast cancer earlier and more accurately.
Retrieval-Augmented VLMs for Multimodal Melanoma Diagnosis
CV and Pattern Recognition
Could help doctors detect skin cancer faster and more accurately.