Contrastive Network Representation Learning

Published: September 14, 2025 | arXiv ID: 2509.11316v1

By: Zihan Dong, Xin Zhou, Ryumei Nakada, and more

Potential Business Impact:

Enables more accurate analysis of brain connectivity networks, supporting tasks such as network classification and detection of important edges.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Network representation learning seeks to embed networks into a low-dimensional space while preserving their structural and semantic properties, thereby facilitating downstream tasks such as classification, trait prediction, edge identification, and community detection. Motivated by challenges in brain connectivity data analysis, which is characterized by subject-specific, high-dimensional, and sparse networks lacking node or edge covariates, we propose a novel contrastive learning-based statistical approach for network edge embedding, which we name Adaptive Contrastive Edge Representation Learning (ACERL). It builds on two key components: contrastive learning of augmented network pairs, and a data-driven adaptive random masking mechanism. We establish non-asymptotic error bounds and show that our method achieves the minimax optimal convergence rate for edge representation learning. We further demonstrate the applicability of the learned representation in multiple downstream tasks, including network classification, important edge detection, and community detection, and establish the corresponding theoretical guarantees. We validate our method on both synthetic data and real brain connectivity studies, and show its competitive performance against the baseline of sparse principal component analysis.
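The core recipe described above, two randomly masked views of each subject's vectorized edge data pushed through a shared embedding and trained with a contrastive (InfoNCE-style) loss, can be sketched minimally. All names, the linear embedding, and the fixed masking probability below are illustrative assumptions; the paper's ACERL uses a data-driven adaptive masking mechanism, which this toy sketch does not implement:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, keep_prob=0.8):
    """Augmented view: randomly zero out edge weights (fixed-rate masking;
    ACERL's masking is adaptive, which is omitted here for brevity)."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask

def embed(x, W):
    """Toy linear edge embedding followed by L2 normalization."""
    z = x @ W
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss: the two masked views of the same subject
    form the positive pair; all other subjects act as negatives."""
    n = z1.shape[0]
    sim = z1 @ z2.T / temperature                      # (n, n) similarities
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()

# Toy data: 16 subjects, each a vector of 100 edge weights.
X = rng.standard_normal((16, 100))
W = rng.standard_normal((100, 8))   # embed edges into 8 dimensions
z1 = embed(random_mask(X), W)
z2 = embed(random_mask(X), W)
loss = contrastive_loss(z1, z2)
```

In a full method, `W` would be optimized by gradient descent on this loss, and the learned embeddings would then feed the downstream classification, edge-detection, and community-detection tasks the abstract mentions.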

Page Count
53 pages

Category
Statistics:
Machine Learning (stat.ML)