Score: 2

Stochastic Siamese MAE Pretraining for Longitudinal Medical Images

Published: December 29, 2025 | arXiv ID: 2512.23441v1

By: Taha Emre, Arunava Chakravarty, Thomas Pinetz, and more

Potential Business Impact:

Helps doctors predict disease changes over time.

Business Areas:
Image Recognition, Data and Analytics, Software

Temporally aware image representations are crucial for capturing disease progression in 3D volumes of longitudinal medical datasets. However, recent state-of-the-art self-supervised learning approaches such as Masked Autoencoding (MAE), despite their strong representation learning capabilities, lack temporal awareness. In this paper, we propose STAMP (Stochastic Temporal Autoencoder with Masked Pretraining), a Siamese MAE framework that encodes temporal information through a stochastic process by conditioning on the time difference between the two input volumes. Unlike deterministic Siamese approaches, which compare scans from different time points but fail to account for the inherent uncertainty in disease evolution, STAMP learns temporal dynamics stochastically by reframing the MAE reconstruction loss as a conditional variational inference objective. We evaluated STAMP on two OCT datasets and one MRI dataset, each with multiple visits per patient. STAMP-pretrained ViT models outperformed both existing temporal MAE methods and foundation models on late-stage Age-Related Macular Degeneration and Alzheimer's Disease progression prediction tasks, which require models to learn the underlying non-deterministic temporal dynamics of these diseases.
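
To make the objective concrete, here is a minimal PyTorch sketch of the idea the abstract describes: a shared (Siamese) encoder processes two visits, a latent distribution is conditioned on the time difference between them, and the MAE reconstruction loss is paired with a KL term, yielding a conditional-variational objective. All names (e.g., StochasticSiameseMAE), layer sizes, the MLP stand-in for the ViT backbone, and the standard-normal prior are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a stochastic Siamese MAE objective in the spirit of STAMP.
# Hypothetical module names and sizes; the paper uses a ViT-based MAE over 3D patches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticSiameseMAE(nn.Module):
    def __init__(self, patch_dim=256, latent_dim=64, hidden=512):
        super().__init__()
        # Shared (Siamese) patch encoder applied to both time points.
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden), nn.GELU(),
                                     nn.Linear(hidden, hidden))
        # Posterior head: maps [feat_t0, feat_t1, dt] -> (mu, logvar),
        # so the temporal latent is conditioned on the time difference dt.
        self.posterior = nn.Linear(2 * hidden + 1, 2 * latent_dim)
        # Decoder reconstructs masked patches of the later scan from the
        # earlier scan's features plus the sampled temporal latent z.
        self.decoder = nn.Sequential(nn.Linear(hidden + latent_dim, hidden),
                                     nn.GELU(), nn.Linear(hidden, patch_dim))

    def forward(self, x0, x1, dt, mask):
        # x0, x1: (B, N, patch_dim) patch tokens of the two visits
        # dt: (B, 1) normalized time difference; mask: (B, N) bool, True = masked
        h0, h1 = self.encoder(x0), self.encoder(x1)
        pooled = torch.cat([h0.mean(1), h1.mean(1), dt], dim=-1)
        mu, logvar = self.posterior(pooled).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_tok = z.unsqueeze(1).expand(-1, h0.size(1), -1)
        recon = self.decoder(torch.cat([h0, z_tok], dim=-1))
        # ELBO-style loss: reconstruct only the masked patches of the later
        # visit, plus a KL term against a standard-normal prior. (A real MAE
        # would also drop masked tokens at the encoder; and per the abstract,
        # STAMP conditions the variational objective itself on dt.)
        rec = F.mse_loss(recon[mask], x1[mask])
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + 1e-3 * kl

# Toy usage with random tensors in place of 3D volume patches.
model = StochasticSiameseMAE()
x0, x1 = torch.randn(2, 8, 256), torch.randn(2, 8, 256)
dt = torch.rand(2, 1)
mask = torch.rand(2, 8) < 0.75  # MAE-style high masking ratio
loss = model(x0, x1, dt, mask)
loss.backward()
```

Sampling z rather than predicting a single deterministic offset is what lets the model represent several plausible futures for the same time gap, which is the motivation the abstract gives for the stochastic formulation.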

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)