KerJEPA: Kernel Discrepancies for Euclidean Self-Supervised Learning
By: Eric Zimmermann, Harley Wiltzer, Justin Szeto, and more
Recent breakthroughs in self-supervised Joint-Embedding Predictive Architectures (JEPAs) have established that regularizing Euclidean representations toward isotropic Gaussian priors yields provable gains in training stability and downstream generalization. We introduce a new, flexible family of KerJEPAs, self-supervised learning algorithms with kernel-based regularizers. One instance of this family corresponds to the recently introduced LeJEPA Epps-Pulley regularizer, which approximates a sliced maximum mean discrepancy (MMD) with a Gaussian prior and Gaussian kernel. By expanding the class of viable kernels and priors and computing the closed-form high-dimensional limit of sliced MMDs, we develop alternative KerJEPAs with a number of favorable properties, including improved training stability and design flexibility.
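To make the sliced-MMD idea concrete, here is a minimal sketch (not the authors' implementation) of a kernel-based regularizer in the spirit described above: embeddings are projected onto random unit directions and, per slice, an MMD² estimate with a Gaussian (RBF) kernel is computed against samples from an isotropic Gaussian prior. All names and parameters (`sliced_mmd_regularizer`, `num_slices`, `bandwidth`) are illustrative assumptions, not the paper's API.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # x: (n, 1), y: (m, 1) -> (n, m) Gaussian kernel matrix on 1-D slices
    d2 = (x - y.T) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def sliced_mmd_regularizer(z, num_slices=64, bandwidth=1.0):
    """Average MMD^2 between 1-D projections of embeddings z (n, d)
    and matching projections of an isotropic Gaussian prior."""
    n, d = z.shape
    prior = torch.randn_like(z)                      # samples from N(0, I_d)
    dirs = torch.randn(d, num_slices, device=z.device)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)     # random unit directions
    mmd2 = z.new_zeros(())
    for k in range(num_slices):
        zp = z @ dirs[:, k:k + 1]                    # (n, 1) sliced embeddings
        gp = prior @ dirs[:, k:k + 1]                # (n, 1) sliced prior
        k_zz = rbf_kernel(zp, zp, bandwidth)
        k_gg = rbf_kernel(gp, gp, bandwidth)
        k_zg = rbf_kernel(zp, gp, bandwidth)
        # biased (V-statistic) MMD^2 estimate, kept simple for illustration
        mmd2 = mmd2 + k_zz.mean() + k_gg.mean() - 2.0 * k_zg.mean()
    return mmd2 / num_slices

# usage sketch: total loss = prediction loss + weighted sliced-MMD regularizer
# loss = prediction_loss + lambda_reg * sliced_mmd_regularizer(embeddings)
```

The closed-form high-dimensional limits and alternative kernels/priors discussed in the abstract would replace the Monte Carlo slicing loop here; this sketch only illustrates the baseline sliced-MMD construction.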
Similar Papers
LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
Machine Learning (CS)
Teaches AI to learn from the world better.
Koopman Invariants as Drivers of Emergent Time-Series Clustering in Joint-Embedding Predictive Architectures
Machine Learning (CS)
Helps AI understand patterns in changing data.