Structure is Supervision: Multiview Masked Autoencoders for Radiology
By: Sonia Laguna, Andrea Agostini, Alain Ryser, and others
Potential Business Impact:
Helps doctors detect diseases in chest X-rays more accurately.
Building robust medical machine learning systems requires pretraining strategies that exploit the intrinsic structure present in clinical data. We introduce Multiview Masked Autoencoder (MVMAE), a self-supervised framework that leverages the natural multi-view organization of radiology studies to learn view-invariant and disease-relevant representations. MVMAE combines masked image reconstruction with cross-view alignment, transforming clinical redundancy across projections into a powerful self-supervisory signal. We further extend this approach with MVMAE-V2T, which incorporates radiology reports as an auxiliary text-based learning signal to enhance semantic grounding while preserving fully vision-based inference. Evaluated on a downstream disease classification task on three large-scale public datasets, MIMIC-CXR, CheXpert, and PadChest, MVMAE consistently outperforms supervised and vision-language baselines. Furthermore, MVMAE-V2T provides additional gains, particularly in low-label regimes where structured textual supervision is most beneficial. Together, these results establish the importance of structural and textual supervision as complementary paths toward scalable, clinically grounded medical foundation models.
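The abstract describes a training objective that combines masked image reconstruction with a cross-view alignment term across radiology projections (e.g., frontal and lateral views). The paper's exact loss is not given here, so the following is a minimal numpy sketch under assumed design choices: a standard MAE-style MSE computed only on masked patches, plus a negative-cosine-similarity alignment term between the two views' embeddings, weighted by a hypothetical coefficient `lam`. Function names and the weighting scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def masked_reconstruction_loss(pred, target, mask):
    """MAE-style loss: mean squared error over masked patches only.

    pred, target: (num_patches, patch_dim) arrays of patch pixels.
    mask: (num_patches,) array, 1 where the patch was masked out.
    """
    per_patch = ((pred - target) ** 2).mean(axis=-1)
    return float((per_patch * mask).sum() / mask.sum())

def cross_view_alignment_loss(z_frontal, z_lateral):
    """Alignment term: 1 - cosine similarity between view embeddings.

    Zero when the two views map to the same direction in embedding space.
    """
    a = z_frontal / np.linalg.norm(z_frontal)
    b = z_lateral / np.linalg.norm(z_lateral)
    return 1.0 - float(a @ b)

def mvmae_style_loss(pred, target, mask, z_frontal, z_lateral, lam=1.0):
    """Combined objective: reconstruction + lam * cross-view alignment.

    `lam` is a hypothetical trade-off weight, not from the paper.
    """
    return (masked_reconstruction_loss(pred, target, mask)
            + lam * cross_view_alignment_loss(z_frontal, z_lateral))
```

In this sketch, perfect reconstruction of the masked patches and identical view embeddings drive both terms to zero; clinical redundancy across projections acts as the supervisory signal because the alignment term penalizes view-specific drift in the embeddings.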
Similar Papers
MuM: Multi-View Masked Image Modeling for 3D Vision
CV and Pattern Recognition
Teaches computers to understand 3D from many pictures.
Masked Autoencoders for Ultrasound Signals: Robust Representation Learning for Downstream Applications
Machine Learning (CS)
Teaches computers to understand sound waves better.
CoMA: Complementary Masking and Hierarchical Dynamic Multi-Window Self-Attention in a Unified Pre-training Framework
CV and Pattern Recognition
Teaches computers to see faster and better.