From Linearity to Non-Linearity: How Masked Autoencoders Capture Spatial Correlations
By: Anthony Bisulco, Rahul Ramesh, Randall Balestriero, and more
Potential Business Impact:
Teaches computers to see better by hiding parts.
Masked Autoencoders (MAEs) have emerged as a powerful pretraining technique for vision foundation models. Despite their effectiveness, they require extensive hyperparameter tuning (masking ratio, patch size, encoder/decoder layers) when applied to novel datasets. While prior theoretical works have analyzed MAEs in terms of their attention patterns and hierarchical latent variable models, the connection between MAE hyperparameters and performance on downstream tasks remains relatively unexplored. This work investigates how MAEs learn spatial correlations in the input image. We analytically derive the features learned by a linear MAE and show that masking ratio and patch size can be used to select for features that capture short- and long-range spatial correlations. We extend this analysis to non-linear MAEs to show that MAE representations adapt to spatial correlations in the dataset, beyond second-order statistics. Finally, we discuss some insights on how to select MAE hyperparameters in practice.
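The two hyperparameters the abstract highlights, masking ratio and patch size, enter an MAE pipeline at the very first step: the image is split into non-overlapping patches and a random fraction of them is hidden before encoding. As a minimal illustration (not the paper's implementation; the function names and shapes here are illustrative assumptions), a NumPy sketch of that patchify-and-mask step looks like this:

```python
import numpy as np

def patchify(img, patch_size):
    """Split an (H, W) image into flattened, non-overlapping square patches."""
    H, W = img.shape
    p = patch_size
    # (H//p, p, W//p, p) -> (H//p, W//p, p, p) -> (num_patches, p*p)
    patches = img.reshape(H // p, p, W // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)

def random_mask(patches, mask_ratio, rng):
    """Keep a random subset of patches; return visible patches and a boolean mask
    (True = masked / hidden from the encoder)."""
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False
    return patches[keep_idx], mask

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
patches = patchify(img, patch_size=8)                     # 16 patches of dim 64
visible, mask = random_mask(patches, mask_ratio=0.75, rng=rng)
print(visible.shape, int(mask.sum()))                     # (4, 64) 12
```

A higher masking ratio or larger patch size forces the decoder to reconstruct pixels that are farther from any visible pixel, which is the mechanism the paper connects to learning longer-range spatial correlations.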
Similar Papers
CoMA: Complementary Masking and Hierarchical Dynamic Multi-Window Self-Attention in a Unified Pre-training Framework
CV and Pattern Recognition
Teaches computers to see faster and better.
TerraMAE: Learning Spatial-Spectral Representations from Hyperspectral Earth Observation Data via Adaptive Masked Autoencoders
CV and Pattern Recognition
Helps satellites better see Earth's details.
Masked Autoencoders for Ultrasound Signals: Robust Representation Learning for Downstream Applications
Machine Learning (CS)
Teaches computers to understand sound waves better.