Score: 3

An Augmentation Overlap Theory of Contrastive Learning

Published: November 5, 2025 | arXiv ID: 2511.03114v1

By: Qi Zhang, Yifei Wang, Yisen Wang

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Teaches computers to group similar items without human-provided labels, reducing the need for costly annotation.

Business Areas:
Software

Self-supervised contrastive learning has recently achieved great success on a variety of tasks, yet its underlying working mechanism remains unclear. In this paper, we first provide the tightest bounds based on the widely adopted assumption of conditional independence. We then relax conditional independence to a more practical assumption of augmentation overlap and derive asymptotically closed bounds on downstream performance. The proposed augmentation overlap theory hinges on the insight that the supports of different intra-class samples become more overlapped under aggressive data augmentations, so simply aligning positive samples (augmented views of the same sample) is enough for contrastive learning to cluster intra-class samples together. Moreover, from this augmentation overlap perspective, we develop an unsupervised metric for evaluating contrastive representations, which aligns well with downstream performance while requiring almost no additional modules. Code is available at https://github.com/PKU-ML/GARC.
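The overlap intuition is easy to see in a toy setting. The sketch below is illustrative only, not the authors' code: the 1-D uniform `augment` function, the two sample positions, and all constants are assumptions. It measures how much the augmentation supports of two same-class samples intersect as augmentation strength grows, which is the condition under which aligning positive pairs can transitively cluster intra-class samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, strength):
    """Hypothetical 1-D augmentation: uniform jitter of width
    `strength` around x (a stand-in for crops, color jitter, etc.)."""
    return x + rng.uniform(-strength, strength, size=x.shape)

# Two distinct samples from the same latent class, 1.0 apart in input space.
x1, x2 = np.array([0.0]), np.array([1.0])

for strength in (0.1, 0.6):
    # Draw many augmented views of x1.
    views1 = np.stack([augment(x1, strength) for _ in range(5000)])
    # Support of x2's augmentations at this strength.
    lo2, hi2 = x2 - strength, x2 + strength
    # Fraction of x1's views that also fall inside x2's augmentation
    # support, i.e. views the two samples could share.
    overlap = np.mean((views1 >= lo2) & (views1 <= hi2))
    print(f"strength={strength}: shared-support fraction = {overlap:.3f}")

# Weak augmentation (0.1): the supports are disjoint (overlap = 0), so
# aligning each sample with its own views says nothing about x1 vs. x2.
# Aggressive augmentation (0.6): the supports intersect, so any encoder
# that keeps positive pairs close is transitively forced to map x1 and
# x2 near each other -- intra-class samples cluster without any labels.
```

Once the shared-support fraction is nonzero, a chain of shared views connects the two samples, so alignment of positive pairs alone propagates into intra-class clustering, which is the mechanism the abstract describes.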

Country of Origin
🇨🇳 🇺🇸 China, United States

Repos / Data Links
https://github.com/PKU-ML/GARC

Page Count
42 pages

Category
Computer Science:
Machine Learning (CS)