Score: 1

Structure is Supervision: Multiview Masked Autoencoders for Radiology

Published: November 27, 2025 | arXiv ID: 2511.22294v1

By: Sonia Laguna, Andrea Agostini, Alain Ryser, and more

Potential Business Impact:

Helps doctors detect diseases in chest X-rays more accurately.

Business Areas:
Image Recognition, Data and Analytics, Software

Building robust medical machine learning systems requires pretraining strategies that exploit the intrinsic structure present in clinical data. We introduce the Multiview Masked Autoencoder (MVMAE), a self-supervised framework that leverages the natural multi-view organization of radiology studies to learn view-invariant and disease-relevant representations. MVMAE combines masked image reconstruction with cross-view alignment, transforming clinical redundancy across projections into a powerful self-supervisory signal. We further extend this approach with MVMAE-V2T, which incorporates radiology reports as an auxiliary text-based learning signal to enhance semantic grounding while preserving fully vision-based inference. Evaluated on downstream disease classification across three large-scale public datasets (MIMIC-CXR, CheXpert, and PadChest), MVMAE consistently outperforms supervised and vision-language baselines. Furthermore, MVMAE-V2T provides additional gains, particularly in low-label regimes where structured textual supervision is most beneficial. Together, these results establish structural and textual supervision as complementary paths toward scalable, clinically grounded medical foundation models.
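
The abstract describes the objective only at a high level: masked reconstruction per view plus a cross-view alignment term, with the two projections of a study supervising each other. The sketch below is a minimal illustration of that idea, not the authors' implementation; the `ToyMultiviewMAE` module, the zero-masking strategy, the cosine alignment term, and parameters such as `mask_ratio` and `align_weight` are all assumptions made for this example.

```python
# Hypothetical sketch: per-view masked reconstruction + cross-view alignment.
# All module and parameter names are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMultiviewMAE(nn.Module):
    def __init__(self, patch_dim=256, latent_dim=128, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(patch_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.GELU(),
                                     nn.Linear(latent_dim, patch_dim))

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim), e.g. flattened patches of one view
        mask = torch.rand(patches.shape[:2], device=patches.device) < self.mask_ratio
        visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # crude stand-in for token dropping
        latent = self.encoder(visible)
        recon = self.decoder(latent)
        # reconstruction loss computed only on masked patches, as in a standard MAE
        rec_loss = F.mse_loss(recon[mask], patches[mask])
        # pooled view embedding used for the cross-view alignment term
        view_embed = latent.mean(dim=1)
        return rec_loss, view_embed


def multiview_loss(model, view_a, view_b, align_weight=1.0):
    """Per-view masked reconstruction plus a cross-view alignment term.
    view_a / view_b: two projections of the same study (e.g. frontal and lateral)."""
    rec_a, emb_a = model(view_a)
    rec_b, emb_b = model(view_b)
    # alignment: pull the embeddings of the two views of one study together
    align = 1.0 - F.cosine_similarity(emb_a, emb_b, dim=-1).mean()
    return rec_a + rec_b + align_weight * align


if __name__ == "__main__":
    model = ToyMultiviewMAE()
    frontal = torch.randn(4, 196, 256)  # stand-in for patchified frontal views
    lateral = torch.randn(4, 196, 256)  # paired lateral views of the same studies
    loss = multiview_loss(model, frontal, lateral)
    loss.backward()
    print(float(loss))
```

Under this reading, the MVMAE-V2T variant would add a further auxiliary term aligning the pooled image embedding with an embedding of the radiology report during pretraining only, so that inference remains purely vision-based; the exact losses and encoders are detailed in the paper rather than here.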

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Computer Vision and Pattern Recognition