Stochastic Siamese MAE Pretraining for Longitudinal Medical Images
By: Taha Emre, Arunava Chakravarty, Thomas Pinetz, and more
Potential Business Impact:
Helps doctors predict disease changes over time.
Temporally aware image representations are crucial for capturing disease progression in 3D volumes from longitudinal medical datasets. However, recent state-of-the-art self-supervised learning approaches such as Masked Autoencoding (MAE), despite their strong representation learning capabilities, lack temporal awareness. In this paper, we propose STAMP (Stochastic Temporal Autoencoder with Masked Pretraining), a Siamese MAE framework that encodes temporal information through a stochastic process by conditioning on the time difference between the two input volumes. Unlike deterministic Siamese approaches, which compare scans from different time points but fail to account for the inherent uncertainty in disease evolution, STAMP learns temporal dynamics stochastically by reframing the MAE reconstruction loss as a conditional variational inference objective. We evaluated STAMP on two OCT datasets and one MRI dataset with multiple visits per patient. STAMP-pretrained ViT models outperformed both existing temporal MAE methods and foundation models on late-stage Age-Related Macular Degeneration and Alzheimer's Disease progression prediction tasks, which require models to learn the underlying non-deterministic temporal dynamics of the diseases.
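To make the core idea concrete, here is a minimal sketch of what a Siamese masked-autoencoder step with a conditional variational objective could look like. This is not the authors' code: the module names, the pooled toy encoder (a real model would use a ViT), the latent sizes, and the KL weight are all illustrative assumptions; only the overall structure (a shared encoder over two visits, conditioning on the time difference, and a reconstruction-plus-KL loss) follows the description above.

```python
# Hypothetical sketch of a stochastic Siamese MAE training step.
# Assumptions: simplified linear "encoder" instead of a ViT, mean pooling,
# a single global latent z, and an arbitrary KL weight of 1e-3.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticSiameseMAE(nn.Module):
    def __init__(self, patch_dim=256, embed_dim=128, latent_dim=64):
        super().__init__()
        # Shared (Siamese) patch encoder; a real implementation would use a ViT.
        self.encoder = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU())
        # Embed the scalar time difference dt between the two visits.
        self.time_embed = nn.Sequential(nn.Linear(1, embed_dim), nn.GELU())
        # Conditional posterior q(z | x1, x2, dt): mean and log-variance heads.
        self.to_mu = nn.Linear(3 * embed_dim, latent_dim)
        self.to_logvar = nn.Linear(3 * embed_dim, latent_dim)
        # Decoder reconstructs masked patches of the later scan from (z, context).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, patch_dim),
        )

    def forward(self, vis1, vis2, target2, dt):
        # vis1, vis2: visible (unmasked) patches of the two scans, (B, N, patch_dim)
        # target2:    masked patches of the later scan to reconstruct, (B, M, patch_dim)
        # dt:         time difference between the visits (e.g., in months), (B, 1)
        h1 = self.encoder(vis1).mean(dim=1)           # pooled context, earlier scan
        h2 = self.encoder(vis2).mean(dim=1)           # pooled context, later scan
        t = self.time_embed(dt)                       # time-difference embedding
        joint = torch.cat([h1, h2, t], dim=-1)
        mu, logvar = self.to_mu(joint), self.to_logvar(joint)
        # Reparameterization trick: sample z from the conditional posterior.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        ctx = h1 + t                                  # condition on earlier scan and dt
        recon = self.decoder(torch.cat([z, ctx], dim=-1))
        recon = recon.unsqueeze(1).expand_as(target2) # broadcast over masked patches
        # ELBO-style loss: reconstruction term plus KL to a standard normal prior.
        rec_loss = F.mse_loss(recon, target2)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec_loss + 1e-3 * kl
```

Sampling z rather than using a deterministic embedding is what distinguishes this objective from a plain Siamese MAE loss: the KL term regularizes the conditional posterior, so the model represents a distribution over plausible future states given dt instead of a single point estimate. The relative weighting of the KL term is a hyperparameter, not a value taken from the paper.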
Similar Papers
Self Pre-training with Adaptive Mask Autoencoders for Variable-Contrast 3D Medical Imaging
Image and Video Processing
Helps doctors find strokes on brain scans better.
CoMA: Complementary Masking and Hierarchical Dynamic Multi-Window Self-Attention in a Unified Pre-training Framework
CV and Pattern Recognition
Teaches computers to see faster and better.
Structure is Supervision: Multiview Masked Autoencoders for Radiology
CV and Pattern Recognition
Helps doctors find diseases in X-rays better.