On the Similarities of Embeddings in Contrastive Learning

Published: June 11, 2025 | arXiv ID: 2506.09781v2

By: Chungpa Lee, Sehee Lim, Kibok Lee, and more

Potential Business Impact:

Improves the quality of AI representations learned when training with small batch sizes.

Business Areas:
Semantic Search, Internet Services

Contrastive learning operates on a simple yet effective principle: Embeddings of positive pairs are pulled together, while those of negative pairs are pushed apart. In this paper, we propose a unified framework for understanding contrastive learning through the lens of cosine similarity, and present two key theoretical insights derived from this framework. First, in full-batch settings, we show that perfect alignment of positive pairs is unattainable when negative-pair similarities fall below a threshold, and this misalignment can be mitigated by incorporating within-view negative pairs into the objective. Second, in mini-batch settings, smaller batch sizes induce stronger separation among negative pairs in the embedding space, i.e., higher variance in their similarities, which in turn degrades the quality of learned representations compared to full-batch settings. To address this, we propose an auxiliary loss that reduces the variance of negative-pair similarities in mini-batch settings. Empirical results show that incorporating the proposed loss improves performance in small-batch settings.
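The loss structure described in the abstract can be illustrated with a short sketch: a cosine-similarity contrastive term that pulls cross-view positive pairs together and pushes negatives apart, plus an auxiliary term that penalizes the variance of negative-pair similarities in a mini-batch. This is a minimal illustration assuming a PyTorch setting; the function name, temperature, weighting, and the exact form of the variance term are assumptions for illustration, not the paper's published objective.

```python
# Hypothetical sketch (not the paper's exact objective): an InfoNCE-style
# contrastive loss on cosine similarities, with an auxiliary penalty on the
# variance of negative-pair similarities, as motivated by the abstract.
import torch
import torch.nn.functional as F

def contrastive_loss_with_variance_penalty(z1, z2, temperature=0.1, var_weight=1.0):
    """z1, z2: (batch, dim) embeddings of two views; row i of z1 and z2 form a positive pair."""
    z1 = F.normalize(z1, dim=1)          # unit-norm rows, so dot products are cosine similarities
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t()                    # (batch, batch) cross-view cosine-similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # InfoNCE-style term: diagonal entries are positive pairs, off-diagonals are negatives.
    nce = F.cross_entropy(sim / temperature, labels)

    # Auxiliary term (assumed form): penalize the variance of negative-pair
    # similarities, which the abstract argues is inflated by small batches.
    neg_mask = ~torch.eye(z1.size(0), dtype=torch.bool, device=z1.device)
    neg_sims = sim[neg_mask]             # off-diagonal entries = negative-pair similarities
    var_penalty = neg_sims.var()

    return nce + var_weight * var_penalty

# Example usage with random embeddings standing in for two augmented views:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = contrastive_loss_with_variance_penalty(z1, z2)
```

The variance penalty here is one plausible reading of "an auxiliary loss that reduces the variance of negative-pair similarities"; the paper itself should be consulted for the precise formulation and recommended weighting.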

Country of Origin
🇰🇷 Korea, Republic of

Page Count
31 pages

Category
Computer Science:
Machine Learning (CS)