NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation
By: Jinfeng Xu, Zheyu Chen, Shuo Yang, and more
Potential Business Impact:
Recommends better by learning from nearby friends.
Graph Neural Networks (GNNs) are widely used in collaborative filtering to capture high-order user-item relationships. To address the data sparsity problem in recommendation systems, Graph Contrastive Learning (GCL) has emerged as a promising paradigm that maximizes mutual information between contrastive views. However, existing GCL methods rely on augmentation techniques that introduce semantically irrelevant noise and incur significant computational and storage costs, limiting effectiveness and efficiency. To overcome these challenges, we propose NLGCL, a novel contrastive learning framework that leverages naturally contrastive views between neighbor layers within GNNs. By treating each node and its neighbors in the next layer as positive pairs, and other nodes as negatives, NLGCL avoids augmentation-based noise while preserving semantic relevance. This paradigm eliminates costly view construction and storage, making it computationally efficient and practical for real-world scenarios. Extensive experiments on four public datasets demonstrate that NLGCL outperforms state-of-the-art baselines in effectiveness and efficiency.
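The core idea above, treating a node's embedding at one GNN layer and its embedding at the next layer (an aggregation over its neighbors) as a positive pair, with other nodes as in-batch negatives, can be sketched as a standard InfoNCE loss between adjacent layer embeddings. This is a minimal illustrative sketch, not the paper's exact implementation; the function name, temperature value, and NumPy formulation are assumptions.

```python
import numpy as np

def neighbor_layer_infonce(z_l, z_next, tau=0.2):
    """Illustrative InfoNCE between adjacent GNN layers.

    z_l    : (N, d) node embeddings at layer l
    z_next : (N, d) node embeddings at layer l+1 (neighbor aggregation)
    Each node's layer-(l+1) embedding is its positive; all other
    nodes in the batch act as negatives. No augmented views are built."""
    # L2-normalize both sets of embeddings
    z_l = z_l / np.linalg.norm(z_l, axis=1, keepdims=True)
    z_next = z_next / np.linalg.norm(z_next, axis=1, keepdims=True)
    logits = z_l @ z_next.T / tau                  # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # softmax cross-entropy with the diagonal as positive pairs
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Because both "views" already exist as intermediate layer outputs of the GNN forward pass, no augmented graph has to be constructed or stored, which is the efficiency argument the abstract makes.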
Similar Papers
Unveiling Contrastive Learning's Capability of Neighborhood Aggregation for Collaborative Filtering
Information Retrieval
Makes movie suggestions better and faster.
Diffusion-augmented Graph Contrastive Learning for Collaborative Filtering
Information Retrieval
Helps movie apps suggest better films for you.
HyperGCL: Multi-Modal Graph Contrastive Learning via Learnable Hypergraph Views
Machine Learning (CS)
Makes computers understand complex data better.