An Augmentation Overlap Theory of Contrastive Learning
By: Qi Zhang, Yifei Wang, Yisen Wang
Potential Business Impact:
Teaches computers to group similar things without labels.
Recently, self-supervised contrastive learning has achieved great success on various tasks, yet its underlying working mechanism remains unclear. In this paper, we first provide the tightest bounds under the widely adopted assumption of conditional independence. We then relax conditional independence to a more practical assumption of augmentation overlap and derive asymptotically closed bounds for downstream performance. The proposed augmentation overlap theory hinges on the insight that the supports of different intra-class samples become more overlapped under aggressive data augmentations, so simply aligning the positive samples (augmented views of the same sample) can drive contrastive learning to cluster intra-class samples together. Moreover, from this augmentation overlap perspective, we develop an unsupervised metric for evaluating the representations learned by contrastive learning, which aligns well with downstream performance while relying on almost no extra modules. Code is available at https://github.com/PKU-ML/GARC.
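The key mechanism the abstract describes, aligning the two augmented views of the same sample, corresponds to the standard alignment term used in contrastive learning. The sketch below is only an illustration of that term under this reading, not the authors' released GARC code; the names `alignment_loss`, `view1`, and `view2` are hypothetical.

```python
# Minimal sketch of the positive-pair alignment term that, per the augmentation
# overlap theory, suffices (under aggressive augmentation) to cluster intra-class
# samples. Illustrative only; not the authors' GARC implementation.
import torch
import torch.nn.functional as F


def alignment_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Pull together embeddings of two augmented views of the same inputs.

    z1, z2: (batch, dim) embeddings; row i of z1 and z2 form a positive pair.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    # Mean squared distance between positive pairs; zero when the views coincide.
    return (z1 - z2).pow(2).sum(dim=-1).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-in for encoder outputs on two augmented views of the same batch.
    view1 = torch.randn(8, 128)
    view2 = view1 + 0.1 * torch.randn(8, 128)  # a mild "augmentation" perturbation
    print(alignment_loss(view1, view2).item())
```

In a real pipeline this loss would be computed on encoder outputs of two stochastic augmentations of each image; the theory's point is that when those augmentations are aggressive enough for intra-class supports to overlap, minimizing this alignment term alone already groups samples of the same class.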
Similar Papers
Contrastive Self-Supervised Network Intrusion Detection using Augmented Negative Pairs
Machine Learning (CS)
Finds computer attacks better by learning what normal traffic looks like.
Revisiting Theory of Contrastive Learning for Domain Generalization
Machine Learning (Stat)
Helps computers learn from new, unseen data.
A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics
Machine Learning (Stat)
Teaches computers to learn from messy, unlabeled pictures.