Metadata-Aligned 3D MRI Representations for Contrast Understanding and Quality Control
By: Mehmet Yigit Avci, Pedro Borges, Virginia Fernandez, and more
Potential Business Impact:
Makes MRI scans understandable for computers.
Magnetic Resonance Imaging suffers from substantial data heterogeneity and the absence of standardized contrast labels across scanners, protocols, and institutions, which severely limits large-scale automated analysis. A unified representation of MRI contrast would enable a wide range of downstream utilities, from automatic sequence recognition to harmonization and quality control, without relying on manual annotations. To this end, we introduce MR-CLIP, a metadata-guided framework that learns MRI contrast representations by aligning volumetric images with their DICOM acquisition parameters. The resulting embeddings show distinct clusters of MRI sequences and outperform supervised 3D baselines in few-shot sequence classification under data scarcity. Moreover, MR-CLIP enables unsupervised data quality control by identifying corrupted or inconsistent metadata through image-metadata embedding distances. By transforming routinely available acquisition metadata into a supervisory signal, MR-CLIP provides a scalable foundation for label-efficient MRI analysis across diverse clinical datasets.
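The abstract describes two mechanisms: a CLIP-style alignment between 3D volume embeddings and embeddings of their DICOM acquisition parameters, and a quality-control check that flags scans whose image embedding sits far from its own metadata embedding. The sketch below shows how such a symmetric contrastive objective and a distance-based QC flag could look in practice; the embedding dimension, threshold, and the idea of random tensors standing in for encoder outputs are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of CLIP-style image-metadata alignment plus a distance-based
# QC flag, assuming a 3D image encoder and a metadata encoder that both map
# into a shared embedding space. All names and values are illustrative.
import torch
import torch.nn.functional as F


def clip_loss(image_emb, meta_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired (volume, metadata) embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    meta_emb = F.normalize(meta_emb, dim=-1)
    logits = image_emb @ meta_emb.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(len(image_emb), device=logits.device)
    loss_i2m = F.cross_entropy(logits, targets)               # image -> metadata direction
    loss_m2i = F.cross_entropy(logits.t(), targets)           # metadata -> image direction
    return (loss_i2m + loss_m2i) / 2


def qc_flags(image_emb, meta_emb, threshold=0.5):
    """Flag volumes whose embedding is far from their own metadata embedding,
    a proxy for corrupted or inconsistent DICOM tags (threshold is assumed)."""
    image_emb = F.normalize(image_emb, dim=-1)
    meta_emb = F.normalize(meta_emb, dim=-1)
    cosine_dist = 1 - (image_emb * meta_emb).sum(dim=-1)
    return cosine_dist > threshold


if __name__ == "__main__":
    # Random tensors stand in for encoder outputs in this toy usage.
    img = torch.randn(8, 512)   # e.g. output of a 3D vision encoder
    txt = torch.randn(8, 512)   # e.g. output of a metadata/text encoder
    print(clip_loss(img, txt))
    print(qc_flags(img, txt))
```

In this reading, training pulls each volume toward its own acquisition-parameter embedding and pushes it away from others in the batch, while the QC step reuses the same embeddings at inference time with no extra labels.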
Similar Papers
DIST-CLIP: Arbitrary Metadata and Image Guided MRI Harmonization via Disentangled Anatomy-Contrast Representations
CV and Pattern Recognition
Makes MRI scans look the same everywhere.
RegionMed-CLIP: A Region-Aware Multimodal Contrastive Learning Pre-trained Model for Medical Image Understanding
CV and Pattern Recognition
Helps doctors find sickness in medical pictures.
Revolutionizing Precise Low Back Pain Diagnosis via Contrastive Learning
CV and Pattern Recognition
Helps doctors find back pain from scans and words.