Comprehensive language-image pre-training for 3D medical image understanding
By: Tassilo Wald, Ibrahim Ethem Hamamci, Yuan Gao, and more
Potential Business Impact:
Helps radiologists detect and retrieve abnormalities in 3D medical scans.
Vision-language pre-training, i.e., aligning images with paired text, is a powerful paradigm to create encoders that can be directly used for tasks such as classification and retrieval, and for downstream tasks such as segmentation and report generation. In the 3D medical image domain, these capabilities allow vision-language encoders (VLEs) to support radiologists by retrieving patients with similar abnormalities or predicting likelihoods of abnormality. While the methodology holds promise, data availability limits the capabilities of current 3D VLEs. In this paper, we alleviate the lack of data by injecting additional inductive biases: introducing a report generation objective and pairing vision-language pre-training with vision-only pre-training. This allows us to leverage both image-only and paired image-text 3D datasets, increasing the total amount of data to which our model is exposed. Through these additional inductive biases, paired with best practices of the 3D medical imaging domain, we develop the Comprehensive Language-image Pre-training (COLIPRI) encoder family. Our COLIPRI encoders achieve state-of-the-art performance in report generation, classification probing, and zero-shot classification, and remain competitive for semantic segmentation.
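The core alignment objective behind this kind of vision-language pre-training is typically a symmetric contrastive (CLIP-style) loss over paired image and report embeddings. Below is a minimal sketch of that loss in PyTorch; the function name, embedding dimension, and temperature are hypothetical placeholders for illustration, not the COLIPRI implementation.

```python
# A minimal sketch of the symmetric contrastive (CLIP-style) alignment loss
# commonly used in vision-language pre-training. All names and dimensions here
# are illustrative assumptions, not the COLIPRI architecture.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings."""
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity matrix: entry (i, j) compares image i with report j.
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs sit on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> report
    loss_t2i = F.cross_entropy(logits.t(), targets)  # report -> image
    return (loss_i2t + loss_t2i) / 2

# Toy usage: random embeddings standing in for 3D-volume and report encoders.
img = torch.randn(8, 512)  # batch of 8 volume embeddings
txt = torch.randn(8, 512)  # batch of 8 paired report embeddings
print(contrastive_alignment_loss(img, txt).item())
```

In a setup like the one the abstract describes, this contrastive term would be combined with a report generation objective and a vision-only pre-training objective, letting image-only volumes contribute to training alongside paired image-text data.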
Similar Papers
VELVET-Med: Vision and Efficient Language Pre-training for Volumetric Imaging Tasks in Medicine
CV and Pattern Recognition
Pairs an efficient language model with a vision encoder to improve understanding of volumetric medical scans.
More performant and scalable: Rethinking contrastive vision-language pre-training of radiology in the LLM era
CV and Pattern Recognition
Rethinks contrastive pre-training on paired radiology images and reports for better performance and scalability in the LLM era.
SLIP: Structural-aware Language-Image Pretraining for Vision-Language Alignment
CV and Pattern Recognition
Adds structural awareness to language-image pretraining to better align pictures with text.