Negative Metric Learning for Graphs
By: Yiyang Zhao, Chengpei Wu, Lilin Zhang, and others
Potential Business Impact:
Teaches computers to learn better from graph data by filtering out misleading examples.
Graph contrastive learning (GCL) often suffers from false negatives, which degrade performance on downstream tasks. Existing methods that address the false negative issue usually rely on human prior knowledge, which still leads GCL to suboptimal results. In this paper, we propose a novel Negative Metric Learning (NML) enhanced GCL (NML-GCL). NML-GCL employs a learnable Negative Metric Network (NMN) to build a negative metric space, in which false negatives can be better distinguished from true negatives based on their distance to the anchor node. To overcome the lack of explicit supervision signals for NML, we propose a joint training scheme with a bi-level optimization objective, which implicitly uses self-supervision signals to iteratively optimize the encoder and the negative metric network. Solid theoretical analysis and extensive experiments on widely used benchmarks verify the superiority of the proposed method.
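To make the idea concrete, here is a minimal sketch of a contrastive (InfoNCE-style) loss in which each negative pair is down-weighted by a learned score, as a learnable negative metric might produce. This is an illustrative toy, not the paper's actual NMN architecture or bi-level training scheme: the random `learned` weight matrix merely stands in for the output of a trained negative metric network, and all names here are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def weighted_info_nce(z1, z2, neg_weights, tau=0.5):
    """InfoNCE loss where negative pair (i, j) is scaled by
    neg_weights[i, j] in [0, 1]; a low weight marks a suspected
    false negative, so it contributes less to the denominator."""
    sim = np.exp(cosine_sim(z1, z2) / tau)   # (n, n) similarity scores
    pos = np.diag(sim)                       # positives: matched views
    mask = 1.0 - np.eye(len(z1))             # exclude self-pairs
    neg = (sim * neg_weights * mask).sum(axis=1)
    return float(np.mean(-np.log(pos / (pos + neg))))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))                 # node embeddings, view 1
z2 = z1 + 0.05 * rng.normal(size=(8, 16))     # perturbed view 2

uniform = np.ones((8, 8))                     # plain InfoNCE: all negatives equal
learned = rng.uniform(0.2, 1.0, size=(8, 8))  # stand-in for NMN output

loss_uniform = weighted_info_nce(z1, z2, uniform)
loss_weighted = weighted_info_nce(z1, z2, learned)
print(loss_uniform, loss_weighted)
```

Because the stand-in weights are below 1, the weighted loss shrinks the negative term; in the actual method those weights would be produced by the jointly trained negative metric network rather than sampled at random.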
Similar Papers
Does GCL Need a Large Number of Negative Samples? Enhancing Graph Contrastive Learning with Effective and Efficient Negative Sampling
Machine Learning (CS)
Teaches computers to understand connections better, faster.
NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation
Information Retrieval
Recommends better by learning from nearby friends.
Model-Driven Graph Contrastive Learning
Machine Learning (CS)
Teaches computers to understand data better.