Score: 1

A Theoretical Framework for Preventing Class Collapse in Supervised Contrastive Learning

Published: March 11, 2025 | arXiv ID: 2503.08203v1

By: Chungpa Lee, Jeongheon Oh, Kibok Lee, and more

Potential Business Impact:

Helps models better distinguish similar items within the same category, improving tasks such as semantic search.

Business Areas:
Semantic Search, Internet Services

Supervised contrastive learning (SupCL) has emerged as a prominent approach in representation learning, leveraging both supervised and self-supervised losses. However, achieving an optimal balance between these losses is challenging; failing to do so can lead to class collapse, reducing discrimination among individual embeddings in the same class. In this paper, we present theoretically grounded guidelines for SupCL to prevent class collapse in learned representations. Specifically, we introduce the Simplex-to-Simplex Embedding Model (SSEM), a theoretical framework that models various embedding structures, including all embeddings that minimize the supervised contrastive loss. Through SSEM, we analyze how hyperparameters affect learned representations, offering practical guidelines for hyperparameter selection to mitigate the risk of class collapse. Our theoretical findings are supported by empirical results across synthetic and real-world datasets.
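To make the balancing problem concrete, below is a minimal Python sketch of how a SupCL objective typically combines a supervised contrastive term with a self-supervised (InfoNCE) term via a weighting hyperparameter. This is not the paper's implementation; the function names, the weight `alpha`, and the temperature values are illustrative assumptions, and it is precisely such balancing hyperparameters whose effect on class collapse the paper analyzes through SSEM.

```python
# Hedged sketch of a generic SupCL objective (not the paper's code).
# `alpha`, function names, and temperatures are illustrative assumptions.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: embeddings sharing a label are positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                      # pairwise similarities
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, -1e9)           # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    # average log-probability over each anchor's positives
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

def selfsup_loss(z1, z2, temperature=0.1):
    """Self-supervised InfoNCE between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def supcl_objective(z1, z2, labels, alpha=0.5, temperature=0.1):
    """Weighted combination of the two losses; `alpha` is the balance
    whose poor choice the paper links to class collapse."""
    z = torch.cat([z1, z2], dim=0)
    y = torch.cat([labels, labels], dim=0)
    return alpha * supcon_loss(z, y, temperature) + (1 - alpha) * selfsup_loss(z1, z2, temperature)
```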

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
38 pages

Category
Computer Science: Machine Learning (CS)